Bot or Not?
Recognize inauthentic activity online
The contents of this guide have been adapted from the Digital Enquirer Kit written by Tactical Tech and produced by GIZ.
Between life hacks, news, and inspiration, not all the information you consume online is equal. You may rely on other people for tips on how to stay healthy, how to care for your child, or who to vote for in the next election... With such sensitive topics, how can you know who to trust?
Capturing your attention online can be a lucrative business. Your clicks, taps, likes, and shares (aka your “engagements”) with posts and pages online can result in rewards for the individuals sharing that content, whether that’s money or popularity, to name just two.
Some people will do just about anything to make that happen, even if that means purchasing followers, likes, shares, and advertisements to boost their content.
Furthermore, not all the accounts you see on social media are authentic people. Some are automated computer programs, while others are people who will say just about anything in order to receive a reward.
In this Data Detox you’ll learn to recognize who’s authentic online vs. who isn’t.
Let’s go!
The Trouble with Bots
A bot is a software program created by people with specialized skills, like engineers and developers, to do certain tasks more efficiently than people can or to imitate human behavior, like hitting the ‘Like’ button on a social media post.
Bots are automated, meaning that a person sets one up and fixes any problems along the way, but otherwise the program runs on its own. Some bots are helpful (e.g., they can find and organize search results, and on platforms like Discord, they can play music, flag swearwords, and do other handy things), while others can actually be harmful (e.g., they can promote dangerous ideas by reposting them thousands of times). Some are transparently labeled as bots, while others are specifically programmed to imitate humans—these have the greatest potential to cause harm.
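If it helps to picture what “automated” means in practice, here is a toy Python sketch of a harmless, clearly labeled bot in the spirit of the Discord example above. Everything in it (the message list, the block list, the bot’s name) is invented for illustration; a real bot would plug into a platform’s actual chat system.

```python
# A toy, clearly labeled moderation bot: it scans incoming chat messages
# and flags any that contain words from a block list. Once it's set up,
# it runs without a human in the loop -- that's what "automated" means.
BLOCKED_WORDS = {"swearword1", "swearword2"}  # invented placeholder list

incoming_messages = [  # invented messages standing in for a live chat feed
    "hello everyone!",
    "this game is swearword1 broken",
    "good night",
]

def moderate(message: str) -> str:
    """Return the message, or a flag notice if it contains a blocked word."""
    if any(word in message.lower() for word in BLOCKED_WORDS):
        return "[ModBot] Message flagged for review."
    return message

for message in incoming_messages:
    print(moderate(message))
```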
Did you know? The personalities you see on social media websites like Instagram may actually be computer-made models (another form of bots imitating humans). Learn more about this phenomenon in this article: “CGI influencers are here. Who’s profiting from them should give you pause”.
Spot a Bot!
With bots becoming more and more prominent in digital media, it’s important to know how to recognize them. Here are a few clues:
Bots have a high volume of posts.
Are there 100+ posts per day? Is it close to an important event (e.g., election, scandal)? That seems like automated behavior. Does the overall number of posts make sense in relation to the account’s creation date?
Bot-posted content is suspicious.
Watch out for red flags such as reposted identical content, inflammatory memes and GIFs, hashtag spamming, occasional off-brand shares, awkward phrasing, and out-of-context imagery.
Bots have shared suspicious content in the past.
Look closely at their past behavior (postings and shares). Does anything else seem odd or suspicious? Have there been past shares that are inflammatory or promote unverified information?
Bot post times stand out.
Either the posts appear continuously, day and night, or only at exactly the same times each day. Neither pattern is typical of a human (see the sketch below).
Any one of these clues could also just be a coincidence, so make sure to look for more than one clue before you assume an account is a bot.
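If you like to tinker, the first and last clues boil down to simple arithmetic, and the short Python sketch below shows the idea. All the numbers and timestamps in it are made up, and the thresholds are rough rules of thumb rather than the methods real analysis tools use.

```python
from datetime import datetime, timezone

# Made-up account data: total post count, creation date, and the
# timestamps of a few recent posts.
total_posts = 90_000
created_on = datetime(2024, 1, 15, tzinfo=timezone.utc)
recent_posts = [
    datetime(2025, 6, 1, 3, 0, tzinfo=timezone.utc),
    datetime(2025, 6, 1, 9, 0, tzinfo=timezone.utc),
    datetime(2025, 6, 1, 15, 0, tzinfo=timezone.utc),
    datetime(2025, 6, 1, 21, 0, tzinfo=timezone.utc),
    datetime(2025, 6, 2, 3, 0, tzinfo=timezone.utc),
]
today = datetime(2025, 6, 2, tzinfo=timezone.utc)

# Clue 1: does the overall volume make sense for the account's age?
account_age_days = max((today - created_on).days, 1)
posts_per_day = total_posts / account_age_days
if posts_per_day > 100:
    print(f"Red flag: roughly {posts_per_day:.0f} posts per day")

# Clue 4: do posts arrive at suspiciously regular intervals?
gaps = [(b - a).total_seconds() for a, b in zip(recent_posts, recent_posts[1:])]
if len(set(gaps)) == 1:
    print(f"Red flag: posts appear exactly every {gaps[0] / 3600:.0f} hours")
```

In practice you could pull these figures by hand from an account’s profile; the point is simply that a bit of division can reveal an implausible posting rate.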
When in doubt, you can go one step further: check organizations that monitor and investigate disinformation, such as Snopes, Politifact, and AfricaCheck, to see if the account has been flagged by them or if the claim has been debunked.
Dive Deeper: Find out more signs of robotic behavior via DFRLab: “#BotSpot: Twelve Ways to Spot a Bot” and First Draft: “How to spot a bot (or not): The main indicators of online automation, co-ordination and inauthentic activity”.
Look for Clues: Websites like Social Blade and Social Bearing* analyze selected social media accounts at a glance, which can help you more easily spot clues of automated activity.
*These tools have been included here as a resource, not as a recommendation. Proceed at your own discretion.
Popularity Isn’t a Sign of Credibility
You may (understandably) think that a social media page with millions of followers that promotes a product or lifestyle must be credible; however, it’s possible that its popularity was purchased and bots made its posts go viral!
In fact, anyone can purchase followers, likes, and shares. Businesses sell these types of “engagements” in packages of hundreds or thousands—some of them claim these are real people, when in fact, they’re bots.
What do you think?
If so many people are sharing a hashtag, it must be a popular opinion, right?
True
False
Bots show this isn’t always the case. A single bot may not do so much damage, but a botnet—a network of bot accounts programmed with the same goal (e.g., retweeting hashtags)—makes messages spread like wildfire. So a hashtag that goes viral doesn’t necessarily represent a popular opinion.
The term botnet refers to a number of internet-connected devices that run multiple bots at a time. Botnets have gained notoriety in recent years after a number of incidents on social media.
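One reason botnets get noticed is that coordination leaves traces: many accounts pushing identical text within minutes of each other. The Python sketch below, using invented posts, shows one simple way to surface that pattern; it’s an illustration of the idea, not how any particular platform or research lab actually does it.

```python
from collections import defaultdict

# Invented posts: (account, text, minute the post appeared).
posts = [
    ("user_a", "Vote YES on #PropX today!", 0),
    ("user_b", "Vote YES on #PropX today!", 1),
    ("user_c", "Vote YES on #PropX today!", 1),
    ("user_d", "Great weather for a hike today", 2),
    ("user_e", "Vote YES on #PropX today!", 3),
]

# Group accounts that posted identical text within the same 5-minute window.
groups = defaultdict(set)
for account, text, minute in posts:
    groups[(text, minute // 5)].add(account)

for (text, _), accounts in groups.items():
    if len(accounts) >= 3:  # arbitrary threshold for this example
        print(f"Possible coordination: {len(accounts)} accounts posted {text!r}")
```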
Botnets in the Media
A few noteworthy examples surfaced in 2020, as the coronavirus pandemic spread across the globe.
Bots creating panic
In March 2020, Reuters reported on a disinformation campaign that was spread by botnets to create panic around the coronavirus pandemic. (1)
Bots shaping the conversation
In May 2020, researchers at Carnegie Mellon University found that nearly half of the millions of accounts on Twitter posting about the coronavirus pandemic were likely bots! When you think about it, that’s a massive number of active social media accounts that are bots instead of real humans. (2)
Did You Know? “Appeal to Authority” is the tendency to trust an individual or organization simply due to their real or perceived reliability, power, or influence.
But just because someone is an expert in one particular subject area doesn’t mean their opinions on other topics should be mistaken for facts. Furthermore, they could have purchased followers, likes, or shares of their posts, or they could themselves be a bot.
Remember to look into sources to see if they’re really qualified to speak on a topic and dig into what verified information backs up their claims.
A lot of the activity that seems suspicious at first glance should be double-checked. If the account has more than one red flag, you have more reason to be wary. If it has several red flags, then it might be a bot.
Now that you've learned how automated software programs can follow, like, and share content online, let’s look at some of the real people who get paid to provide the same kinds of engagement.
Rewards for Reviews
The engagements you see online may not always come from bots, but from another source called ‘click farms’.
A click farm is where individuals are hired to do specific tasks like clicking on an advertisement, scrolling through a website for a certain period of time, and liking posts, to name a few.
Let’s learn more about click farms and how they work.
More attention equals more money.
As with clickbait, the more attention an article, video, or image receives, the more money it’s likely to earn. Furthermore, a high number of engagements may result in the web page or post jumping to the top of the search or feed results, and might even encourage more individuals to click to see what all the hype is about.
Incentives are worth it.
High-value outcomes incentivize individuals and businesses to purchase activity from click farms in hopes that it will result in enough authentic engagements to make it all worthwhile.
They more easily go undetected.
Unlike bot activity, click farm activity is not always tied to a specific user account (e.g., with simple tasks like clicking on ads or scrolling through pages), which makes it more challenging for analytics tools to recognize.
The result: click farms are less likely than bots to get blocked or banned by websites like Instagram and Twitter.
Another form of insincerity that you may have come across on the internet without realizing it is incentivized reviews. These reviews are not written by genuine consumers (even if they’re presented as such), but by writers who receive some form of compensation or reward for each review.
The business of incentivized reviews is so common on websites like Amazon that there’s a demand for services like ReviewMeta and Fakespot*, which analyze reviews and filter out potentially inauthentic ones. Filtering can bring a product score from five stars down to one!
*These tools have been included here as a resource, not as a recommendation. Proceed at your own discretion.
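To see why filtering can swing a score so dramatically, here’s a tiny Python illustration with invented ratings. The “suspect” flag stands in for whatever signals a review checker might weigh; it is not how ReviewMeta or Fakespot actually work.

```python
# Invented ratings: (stars, flagged_as_suspect). The flag is a toy stand-in
# for signals a review checker might use (duplicate text, reviewer history, ...).
reviews = [
    (5, True), (5, True), (5, True), (5, True),
    (5, True), (5, True), (5, True), (5, True),
    (1, False), (2, False), (1, False),
]

def average(stars):
    return sum(stars) / len(stars) if stars else 0.0

all_stars = [stars for stars, _ in reviews]
kept_stars = [stars for stars, suspect in reviews if not suspect]

print(f"Raw score:      {average(all_stars):.1f} stars")   # 4.0
print(f"Filtered score: {average(kept_stars):.1f} stars")  # 1.3
```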
Keep It Real... and Critical
Next time you face posts online that are surprising, infuriating, or too good to be true, just remember:
- Not all information and sources are authentic. Bots may be designed to appear like real people. But even real people may be paid to promote a product or write a review!
- As platforms become better at identifying and suspending the accounts of bots or incentivized individuals, the bots and incentivized accounts are becoming more careful and creative, making them even trickier to spot.
- Be critical about where you look for information and which content you click on and consume. Just because some content receives a lot of attention doesn’t mean that it’s genuine.
- Seek out verified and verifiable information. You can rely on trustworthy fact-checking organizations like EUvsDisinfo, AfricaCheck, and Snopes, or find one in your country through Reporters’ Lab.
Keep an eye on the red flags you’ve learned about in this Data Detox, and check out 6 Tips to Steer Clear of Misinformation Online for more advice.