To evade TikTok's content moderation rules, a host of coded slang terms has emerged on the platform.
In early 2023, American actress and model Julia Fox faced extensive criticism for inadvertently commenting on a TikTok video that used such slang — words repurposed to conceal their true meaning from moderators.
As reported, a TikToker named Conor Whipple wrote in the description of his video: 'I gave mascara to a girl and it was so amazing that the girl decided she and her friend would try it without my consent.'

In the comments section of the video, Julia Fox wrote: 'I don't know why, but I don't feel bad about it.'
Her comment immediately came under attack, as a slew of users, including the video's owner, accused the actress of condoning sexual assault.
Once Fox grasped the issue, she explained that she had no idea the term 'mascara' (a makeup item) was being used on the platform with a sexual connotation, and she apologized to the post's owner.
Coded 18+ slang terms
On TikTok, 'mascara' is just one of many terms encoded in an emerging internet vocabulary known as 'algospeak' — terms that social media users creatively devise to deceive AI-powered automated content moderation systems.
To bypass this moderation, users have coined unrelated stand-ins like 'mascara' and 'ana,' near-homophone spellings of the original words like 'seggs' (sex) and 'cutt1ing' (cutting), and euphemisms like 'unalive' (dead).

Alongside these are terms like 'Yt people,' widely used to refer to white people, or 'Le$bean,' replacing 'lesbian' in videos. As TikTok tried to limit Covid-19-related content, 'panini' and 'panoramic' became slang for 'pandemic.' For banned substances, terms like 'snow' or 'skiing' are also used.
Although these slang terms let TikTok users express themselves without fear of censorship, they also serve as a 'gateway' through which dangerous content related to banned substances, sexual activity, or self-harm can spread more easily and widely than ever before.
According to a survey conducted by TELUS International, 51% of internet users say they can recognize the slang used on TikTok. Among Gen Z users alone, the figure rises to 72%, indicating how influential and widespread these slang terms have become.

According to TikTok's community guidelines, the app 'does not allow content that describes, spreads, or glorifies activities that may lead to suicide, self-harm, or eating disorders.'
However, despite moderation efforts, the 'encoded' words above also help users drawn to these harmful topics find one another faster. That is considered truly dangerous, especially when users are seeking content that could harm themselves.
Difficult to prevent harmful content
According to artificial intelligence expert Vince Lynch, this phenomenon is increasingly common and poses enormous challenges to content moderation.
According to the company's own data, TikTok removed 110 million videos between July and September 2022 alone, accounting for about 1% of all content posted to the app.
Roughly half of those videos were automatically removed by AI for containing prohibited content — which is why content creators began substituting coded terms for strong or sensitive language.

Slang terms are considered one of the biggest obstacles to moderation, especially since TikTok has millions of young users — not to mention underage users who lie about their age to access adult content.
Although the technology behind content moderation is advancing rapidly, Lynch worries that circumvention tactics will diversify in step, turning content moderation into an endless 'cat-and-mouse game.'
'The more rules social media platforms introduce, the more ways users will find to get around them,' Lynch remarked.
Source: NY Post
