Posted By Smple Staff

TikTok’s Algorithm: The Rabbit Hole to Radicalisation and Misinformation?

People have long been aware of the spectre that is data collection. For years, our habits, thoughts and interests have been recorded and regurgitated back to us through increasingly targeted content and ads. It has begun to seem normal. Algorithms that target consumer interests are everywhere - but have we finally reached the tipping point?
 
Jokes about Google’s targeted ads - and their hilarious accuracy - are very common. Have an interest in horses? Here are five Etsy artists who specialise in equestrian art. Did a late-night existential crisis lead you to google Chinese mysticism? Here are four amazing oil diffusers to aid your spiritual journey.
 
No matter how stringent your cookie policy or privacy settings are, search engines always seem to create an increasingly accurate profile of your consumer habits and curate a worryingly appealing gallery to accompany you around the internet.
 
Purveyors of targeted advertising may argue that it is a tool of convenience. It is great that the internet knows that you are obsessed with matcha tea, and can now tell you about all the accessories such an interest can bring into your life. Targeted ads are supposed to be a time-saving tool in the chaos that is modern living.
 
When considered at the individual level, the process of targeted advertising is somewhat understandable - it has an obvious input and output. But in reality there are millions of users, all contributing to the machine’s understanding of consumer trends, moods and dislikes. Considered at the macro level, the algorithm becomes incomprehensible - a collection of constantly changing calculations that factors our late-night browsing into an incessant stream of information.
 
These algorithms have become ‘black boxes’ to anyone who isn’t a data scientist. We understand that our data goes in, but cannot explain what comes out. All we understand is that they have become a ubiquitous part of our lives. 
 
However, even if we don’t understand them, their unchallenged ubiquity perhaps relies on a simple equation - or rather, on an assumed direction in the flow of information: we input data and receive data as output. Implicit within this equation is the assumption that the power lies with the user, rather than the algorithm. But what if this is no longer entirely accurate? Consider the reverse: once the algorithm has started to spin its web, we no longer have true agency; we can only react.
 
This idea has become increasingly prevalent in the context of TikTok. In comparison to other entertainment platforms, such as YouTube or Netflix, it dictates rather than recommends what you watch while you are scrolling. It is also infamously adept at predicting what you want to see. 
 
Unfortunately, the ‘black box’ that is the TikTok algorithm is locked behind many layers of copyright and data protection laws, so not even data scientists can get a clear idea of how it works. Officially, the company says that shares, likes, follows, and what you watch, as well as other factors, all play a role in what TikTok shows you. 
 
However, despite TikTok’s statements on how its algorithm functions, an investigation by The Wall Street Journal found that only one signal is necessary for the algorithm to work out your interests: how long you linger over each video. Each hesitation, pause or replay provides the algorithm with potential insight into your hidden desires. This, in turn, creates a mimetic conversation between the user and the software - a sophisticated, circular cycle of suggestion. Not only does TikTok acquire a seemingly oracle-like ability to predict what content you want to watch, but it also becomes better and better at drawing you in. It can begin to reinforce dominant themes within your interests, ultimately making it even harder to escape the rabbit hole. This ability to predict and reinforce interests becomes more sinister once we consider it in the context of radicalisation.
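To make this feedback loop concrete, here is a deliberately crude sketch, in Python, of how a watch-time-driven recommender can narrow a feed all by itself. Everything in it - the topic labels, the learning rate, the update rule - is a hypothetical stand-in for illustration, not TikTok’s actual system.

```python
import random

# Toy model of the watch-time feedback loop described above.
# Topics, learning rate and update rule are invented for illustration;
# this is NOT TikTok's actual algorithm.

TOPICS = ["cooking", "politics", "fitness", "conspiracy", "pets"]
LEARNING_RATE = 0.3

def pick_video(interest):
    """Sample the next video's topic in proportion to inferred interest."""
    topics = list(interest)
    weights = [interest[t] for t in topics]
    return random.choices(topics, weights=weights, k=1)[0]

def update_interest(interest, topic, watch_fraction):
    """Reinforce whatever the user lingered on. watch_fraction is the
    share of the video actually watched (0.0 to 1.0)."""
    interest[topic] += LEARNING_RATE * watch_fraction

def simulate(n_videos=50):
    # Start with a flat profile: every topic equally likely.
    interest = {t: 1.0 for t in TOPICS}
    for _ in range(n_videos):
        topic = pick_video(interest)
        # Pretend this user quietly lingers on one topic and skips the rest.
        watch_fraction = 0.9 if topic == "conspiracy" else random.uniform(0.1, 0.3)
        update_interest(interest, topic, watch_fraction)
    return interest

if __name__ == "__main__":
    for topic, score in sorted(simulate().items(), key=lambda kv: -kv[1]):
        print(f"{topic:>10}: {score:.2f}")
```

Run it and the ‘conspiracy’ score quickly dominates, so the sampler serves little else. The point of the sketch is that no like, share or follow is ever needed - only the asymmetry in how long the user lingers.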

As the algorithm gets to know you, the videos that make it to your ‘For You’ page become more specific. In the beginning, the videos you see tend to have millions of views, but as time passes this changes. And as view counts decrease, so does the likelihood that a video has been moderated. This is when we begin to see videos that have slipped through community guidelines.

This can result in the app tunnelling vulnerable users towards obscure and disturbing subjects. For instance, pausing on or rewatching a video tagged #sad will lead to a flood of sadness on your ‘For You’ page. Studies have shown that following the algorithm to the bottom of this particular rabbit hole leads to videos promoting suicide and self-harm.

Guillaume Chaslot, an AI expert and data scientist, explains that the “algorithm is pushing the user to more and more extreme content, so it can push towards more and more watch time.” While this isn’t vastly different to, say, YouTube suggesting which videos to watch next, what sets TikTok apart is the passivity of the user during this process. In terms of user experience, TikTok has shortened the gap between interface and user: you do not need to make any conscious decisions, just keep mindlessly scrolling. Consequently, those with a general interest in a subject can sink into more and more specialised content without even realising it. For example, a user with only a general interest in politics could end up being served videos about election conspiracies and QAnon.
 
Sifting through different areas of right-wing politics, Olivia Little and Abbie Richards, researchers working with the organisation Media Matters, found that viewers with an interest in anti-trans content were more likely to be shown other extreme right-wing content. After creating a fake account and analysing over 400 videos, they tracked how quickly the account’s ‘For You’ page became populated with far-right material. In surprisingly little time, multiple videos violating TikTok’s ban on ‘hateful behaviour’ began to appear. Concluding their study, they characterised anti-trans content as a ‘gateway drug’ to even more extreme content. In other words, this ‘gateway prejudice’ can lead users to discover and assimilate misogynistic, racist, anti-vaccine or homophobic content - solely by interacting with transphobic videos.

While awareness of misinformation and radicalisation has long been present in conversations surrounding social media, TikTok seems to have created its own unique landscape - one increasingly populated with hateful and malignant content. The Institute for Strategic Dialogue has pointed out that “the algorithmic systems underlying TikTok’s product are evidently helping to promote and amplify this content to audiences that might not otherwise have found it”. Again, this content consistently slips through the ‘enforcement gap’ that TikTok claims to be closing.

This problem of misinformation has recently been highlighted by the conflict in Ukraine, where videos of military drills went viral despite many being years old. Not only have these videos slipped past moderators, but there is also no easy way to check their reliability while on the app. In fact, users can upload and post videos indiscriminately, without providing so much as a watermark or attributing any sources when making claims about unfolding events.

On a broader level, it is easy to see how mythos grows around artificial intelligence and machine learning. It makes for a good story. But because the TikTok algorithm is cloaked in secrecy, this inevitably adds to its profile within any narrative assessing its power to influence human behaviour, perhaps even exaggerating that influence. If we cut away this glamourisation, it is easier to sympathise with those who argue that it is not the algorithm’s fault - that it is a neutral tool, not inherently bad, and that it is how it is used that makes it malevolent.

However, what is important to remember is that the underlying goal is profit. The algorithm’s main aim is to appeal to our desires (no matter how harmful) and to endlessly draw out watch time. With this raison d'être, there is little wonder that it can enlarge and distort the worst parts of ourselves.
