TikTok is working on new ways to classify its content and restrict it to certain audiences based on its characteristics and themes, with the aim of making it harder for minors to consume videos aimed at adults.
The application seeks to filter and organize videos based on “the maturity of the content and the most comfortable subject areas” for users, according to TikTok’s head of Broadcast Policies, Tracy Elizabeth, in statements reported by Engadget.
The executive indicated that, once this classification system is implemented, content identified as containing adult themes may be restricted by user age to prevent adolescents from accessing harmful material.
By contrast, for content covering “less mature” or general topics not specifically intended for adults, users will be able to decide for themselves whether to view or skip it.
Elizabeth indicated that the objective is to let users choose “the category with which they feel most comfortable” and, although she would only say the system is “in the innovation phase”, it could resemble the rating systems used for movies or video games.
Updating Your Policies
These clarifications stem from a recent statement issued by the platform detailing the policy updates TikTok is carrying out to promote safety and well-being on its service.
Regarding the point raised by Elizabeth, the platform has indicated that when it finds content that may not be appropriate for all audiences, it does everything possible to remove it from the recommendation system.
“Transparency with our community is important to us and these updates clarify or expand the types of behavior and content that we will remove from our platform or that will not appear in the ‘For You’ recommendations section,” said the platform’s head of Trust and Safety and signatory of the statement, Cormac Keenan.
These updates, which will be implemented in the coming weeks, include the strengthening of its policy on hoaxes and dangerous challenges to help prevent content of this type from continuing to spread on the platform.
Although this aspect was already part of the review of its usage policies, for which it developed a new resource for its Safety Center in November, Keenan stressed that it will now be highlighted in a section separate from the suicide and self-harm category so that the community “can become familiar with these rules”.
In addition, the executive has encouraged the adolescent community to help keep the platform safe by evaluating online challenges through four steps: ‘Stop’ (take a break before continuing to use TikTok), ‘Think’ (consider whether the challenge is safe, harmful or real), ‘Decide’ (choose whether or not to do it depending on the risk involved) and ‘Act’ (report harmful or deceptive challenges to the platform and help stop their spread).
To encourage this communication with the platform, TikTok will launch a series of videos next week on the ‘Discover’ page, under the hashtag #SaferTogether, asking users to follow these four guidelines.
With the update to its user protection policies, TikTok will also expand its focus on eating disorders. In addition to removing content that promotes eating disorders, it will begin removing content that promotes unhealthy practices, with the help of subject matter experts, researchers, and physicians.
“Our goal is to recognize more symptoms, such as excessive exercise or intermittent fasting, that often go unrecognized as signs of a potential problem,” the statement said.
TikTok has also used this review of its policies to clarify the different types of hateful ideologies prohibited on the platform, such as ‘deadnaming’ (referring to a transgender person by their birth name), misgendering (using words to refer to a person that do not correspond to their gender identity) and misogyny, as well as content that supports sexual conversion therapy programs.
Finally, TikTok plans to expand its policies to protect the integrity of its users, prohibiting unauthorized access to the platform and user accounts, content, or data for criminal purposes.
Alongside this, the company has announced that state-of-the-art centres for cyber incident investigation, monitoring and information collection will open this year in Washington DC, Dublin and Singapore, within the framework of its Fusion Center operations.
91 Million Videos Deleted
Alongside its new safety policies, the platform has released its most recent enforcement report, published this Tuesday, which details the total number of videos and accounts removed for breaching its terms of use.
The data reveals that more than 91 million videos violating TikTok’s rules were removed during the third quarter of 2021, representing about 1 per cent of all videos uploaded to the platform.
Of those videos, 95.1 per cent were removed before a user reported them for inappropriate content, 88.8 per cent before the video received any views, and 93.9 per cent within 24 hours of publication.
“We continue to expand our system, which detects and removes certain categories of violations when videos are uploaded, such as nudity, sexual activities, child safety and illegal activities,” Keenan noted in the statement, which has increased the volume of automatic removals.
It should be noted that the country with the highest volume of deleted videos is the United States, where a total of 13,918,537 videos were removed between July 1 and September 30, 2021, followed by Indonesia (6,881,904), Brazil (6,173,414), Pakistan (6,019,754) and Russia (5,906,859).