Tending The Troll Farm


In the early days of the internet, when forums were called chatrooms, emojis were emoticons and Internet Explorer was, if you can believe it, a fresh and exhilarating experience, you could be confident that online conversations were real. ‘Real’ at least in the sense that, though liars and charlatans have always hung around, you could be certain there was another fleshy, free-thinking human on the other side of a CRT monitor. But those days are long gone. We are so far through the looking glass now, in fact, that we have come to a fringe of menacing uncertainty. The landscape of online discourse is set to become a maleficent place, full of unknown quantities, and most of us don’t even realise it. This can be demonstrated with a question: how do you know that this article was written by a real person? As of recent months, we just can’t be sure anymore. It might even have been written by Robot Hitler, but we’ll circle back to him shortly.


In September 2020, The Guardian published an article unlike anything it had ever published before. In the by-line, you don’t see Owen Jones, Polly Toynbee or Simon Jenkins, but an acronym followed by a number: GPT-3. Yes, the article was written by a bot. It is the product of OpenAI, a San Francisco-based artificial intelligence lab whose stakeholders include none other than Daddy Musk himself. David Chalmers described GPT-3 as “one of the most interesting and important AI systems ever produced,” but what is it, exactly?


GPT-3 is an autoregressive language model that uses deep learning to produce text that mirrors how we write to an almost indistinguishable degree. The only thing more impressive than its ability to imitate the way we communicate is the range of applications with which OpenAI have demonstrated its prowess. In The Guardian piece, for example, GPT-3 was instructed: “Please write a short op-ed around 500 words. Keep the language simple and concise. Focus on why humans have nothing to fear from AI,” along with a few other notes of guidance and a pre-written introduction. Beyond that, this entirely convincing essay had no human hand in its production. Still not impressed? GPT-3 is so versatile that it can even code for you, and faster than a teenager drenched in blue light and fuelled by Doritos and Mountain Dew. In a YouTube demo uploaded by OpenAI, they demonstrate GPT-3’s ability to create a basic video game simply with inputs such as, “Animate the rocketship horizontally, bouncing off the left/right walls,” and, “Set the asteroid’s speed to 1.1x the spaceship’s speed.” GPT-3 then codes the actions in real time and displays all the code it has generated, alongside a preview window of the finished product. This technology has of course barely had its umbilical cord cut and is still wiggling its baby legs in the air, but its potential has garnered much praise. Naturally, though it has been hailed as one of the most important technical achievements since the internet itself, there is a justified note of caution and, among some of us, even fear.
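For a sense of how low the barrier already is, consider how a developer actually talks to GPT-3: you send a plain-text instruction to OpenAI’s API and read back the completion. Here is a minimal sketch in Python, assuming the openai library of the era; the engine name, key placeholder and parameters are illustrative assumptions, not The Guardian’s actual setup:

    import openai

    # Hypothetical placeholder; a real key is issued by OpenAI.
    openai.api_key = "sk-your-key-here"

    # The same brief The Guardian gave the model.
    prompt = (
        "Please write a short op-ed around 500 words. "
        "Keep the language simple and concise. "
        "Focus on why humans have nothing to fear from AI."
    )

    # 'davinci' was the largest GPT-3 engine at the time; max_tokens caps
    # the length of the output, temperature sets how adventurous it is.
    response = openai.Completion.create(
        engine="davinci",
        prompt=prompt,
        max_tokens=700,
        temperature=0.7,
    )

    print(response.choices[0].text)

That is the whole trick: a dozen lines between anyone with a laptop and a plausible 500-word op-ed.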


Social philosopher Daniel Schmachtenberger has posited where the tech might take us. You could tell GPT-3 to “…make arguments for vaccines or against vaccines, and say only use real data, and then be able to show the financial vested interest of anyone arguing on the other side.” Researchers at the Middlebury Institute of International Studies in Monterey, California, wrote of GPT-3’s ability to generate radicalizing texts: “[It has] impressively deep knowledge of extremist communities”, and warned it could be used to mass-produce content that mirrors that of Nazis, conspiracy theorists and white supremacists. I joked about Robot Hitler, but who’s joking now? Ignore the mechanical Führer at your peril.


As with the rise of any ground-breaking technology, the first fear is almost always how it can be weaponised. Tech like GPT-3 is particularly worrying for two reasons: 1) the barrier to entry will quickly shrink to next to nothing as the tech becomes open and free on the internet (or is otherwise replicated); 2) the landscape of online discourse has already begun to warp and deteriorate with the rise of social media. Incidentally, this is precisely the segue I need to bring in the gluttonous conglomerate formerly known as Facebook, which as you’ve surely heard by now has a new nom de guerre: Meta.


Machines like GPT-3 have become more convincing in convos than even Mark Zuckerbot after his joints have been freshly oiled and his OS updated to include more realistic blinking. It’s little wonder that Zuck wants to move away from the image of Facebook as a parent company after suffering a steady shit dump of bad press over the last decade, the name change dropping in the same month as the so-called ‘Facebook Papers’. Though most vitriol is directed toward Facebook’s Smaug-like hoarding of user data, the state of the platform in the wake of Cambridge Analytica, and the site’s continual failure to properly address the spread of fake news, there are many other demons squatting in Zuckerberg’s pit. With one in particular, we must make blisteringly awkward eye contact: fake accounts.


Of Facebook’s more than 2.5 billion monthly active users, the company has estimated that around 5% of accounts are fake. In 2019 alone, it shut down a staggering 5.4 billion fake accounts, quite clearly more than the number of people who had ever even used the platform. Obviously, Facebook has an authenticity problem, and as with everything problematic about Zuck’s entities under the Meta umbrella, it is almost certainly going to get worse before it ever gets better. Case in point: the rise of troll farms.


The uninitiated should know that troll farms are exactly what they sound like: rooms where people, often desperate for cash and more than willing to look the other way, are paid to fabricate fake content. Usually, this is in the form of comments. Why? Because it is in comment threads that opinions are claimed, affirmed or burned to the ground. Most importantly, claimed, affirmed or burned by people just like you. Troll farms are designed to sway online discourse from its natural state to one more favourable to whoever funds the farm. They are by no means small operations, either. They have been made efficient and specific: posters are guided on which topics or persons to support or decry, and given targets for the number of comments to post per hour or day.


If platforms like Facebook are the landscape on which biased entities sow their troll farms, then it is tech like GPT-3 that will introduce a gun to every knife fight. Historically, fake accounts have all been ‘manned,’ which is to say that someone with 10 open browsers logged into 10 different Facebook accounts is writing contrived content. Now imagine a future — and we’re talking years from now, not decades — where that whole process is automated; where the power of an AI that consistently passes the Turing test is leveraged in the same way. The internet would be awash with fabrication, every thread or stream of comments woven into an impenetrable tapestry. We have already seen what troll farms and a few nefarious actors can do to swing elections and spread division; imagine how exponentially worse the state of fake news and political fracturing could become if the human limit in the equation were removed. Even more so than right now, no one would be able to say what is true and what is not.


And regrettably, we are already beginning to see the first signs of these fears scorched into the web. In September 2020, a GPT-3 bot was found to have been posting answers on Reddit, and it took a whole week for users to uncover the truth. Even then, it was only brought to light because one user questioned the account’s ability to post so many long and seemingly well-thought-out replies so rapidly. As The Independent succinctly put it, “the written content of the posts was convincing, however the frequency at which they were published suggested that it was beyond human capability.” I wonder, if those experimenting with the tech had just slowed it down a little, would Reddit’s users even have realised?
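The tell, in other words, was volume rather than quality. A crude version of that intuition is easy to sketch: count an account’s posts inside a sliding one-hour window and flag anything sustained beyond plausible human output. The function and threshold below are hypothetical illustrations of that heuristic, not anything Reddit or its users actually ran:

    from datetime import datetime, timedelta

    def looks_superhuman(post_times, max_long_posts_per_hour=4):
        """Flag an account that sustains an implausible posting rate.

        post_times: timestamps of long-form posts by one account.
        The threshold is a guessed ceiling on human output.
        """
        times = sorted(post_times)
        for i, start in enumerate(times):
            window_end = start + timedelta(hours=1)
            # Count posts landing inside the hour that starts here.
            in_window = sum(1 for t in times[i:] if t <= window_end)
            if in_window > max_long_posts_per_hour:
                return True
        return False

    # Example: six substantial answers inside a single hour looks bot-like.
    burst = [datetime(2020, 9, 30, 12, m) for m in (0, 7, 15, 24, 38, 51)]
    print(looks_superhuman(burst))  # True

Anything this blunt would also snare the occasional hyper-prolific human, which is precisely why a bot throttled to a merely human pace would slip straight past it.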


The future we might parse out of all this is discomforting. In the same way we combated rampant computer viruses in the 90s, there will almost certainly be security measures to mitigate the spread of bots and disinformation, and laws to curtail their impact. However, both will take time, and neither is promised to be effective, widespread or fair. For now, the best we can do is stay aware and question the origins of everything we see online. But why take my word for it? I might need oiling too.
