What’s the Buzz with AI?

Share your Data Detox experience, keep in touch, or get inspiration for activities by writing to Safa at datadetox@tacticaltech.org!

‘AI’ has become a buzzword that’s used to describe all kinds of tools and applications – from virtual assistants to deepfake generators.

There’s a lot of attention on the fun or creative aspects of these ‘cool’ new tools… but did you know that AI is also at work behind many of the essential systems that affect our everyday lives, including employment, health care, education, law enforcement, and so much more?

AI tools are making some systems a lot faster and more efficient. But that also means they’re supercharging the speed and efficiency of other things, too – including online harms like misinformation, scams, and life-altering harassment… even influencing how people vote.

As you follow this Data Detox, you’ll get a closer look at AI and see that all this buzz can have a big sting.

Let’s go!


1. Demystify AI

Understanding AI doesn’t have to feel like rocket science. Some people talk about AI as if it’s magic, but ‘artificial intelligence’ is just a machine.

Did you know? AI is not just one thing. Simply put, AI tools are computer programs that have been fed a lot of data to help them make predictions. “AI” refers to a variety of tools designed to recognise patterns, solve problems, and make decisions at a much greater speed and scale than humans can.

But like any tool, AI is designed and programmed by humans. The people who create these machines give them rules to follow: “Do this; but don’t do that.” Knowing that AI tools are automated systems with their own human-influenced limitations can give you more confidence to talk about the capabilities and drawbacks of AI.
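Curious what “a computer program that’s been fed data to make predictions” can look like? Below is a deliberately tiny sketch in Python – nothing like the scale of real AI systems, and the training text, the blocked-word rule, and the function name are all invented for illustration. It just shows the basic recipe: count patterns in data, use those counts to predict what comes next, and obey whatever rules a human wrote down.

# A toy "prediction machine": it is fed a little data (the training text),
# counts which word tends to follow which, and then predicts the next word.
# The text, the rule, and the names here are invented for illustration;
# real AI tools work at a vastly bigger scale, but the basic idea is similar.
from collections import Counter, defaultdict

training_text = (
    "the cat sat on the mat the cat chased the dog "
    "the dog sat on the grass"
)

# "Feed" the program data: count which word follows which.
follows = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word][next_word] += 1

# A human-written rule: "do this, but don't do that."
BLOCKED_WORDS = {"dog"}  # pretend a designer decided this word should never be suggested

def predict_next(word):
    """Predict the most likely next word, respecting the human-written rule."""
    candidates = [
        (count, candidate)
        for candidate, count in follows[word].items()
        if candidate not in BLOCKED_WORDS
    ]
    if not candidates:
        return None  # never seen this word before: no pattern, no prediction
    return max(candidates)[1]

print(predict_next("the"))    # -> 'cat' (the most common pattern in the data)
print(predict_next("sat"))    # -> 'on'
print(predict_next("robot"))  # -> None

Even this toy version shows those human-influenced limits: it can only repeat patterns it has already seen, it follows whatever rules its makers chose, and it has no idea whether its prediction is true.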

When people talk about AI, they could be talking about so many things. Check out some examples of AI tools that are especially popular:

Text-generation tools

Text-generation tools create content based on certain keywords (or “prompts”) you define. They are trained on large amounts of text from the internet, of varying degrees of quality.

You might hear these referred to as “large language models” (LLMs) or by specific product names like ChatGPT, or even more casual terms like “chatbots” or “AI assistants.”

While these tools have been known to achieve feats of human-like intelligence, like acing exams (1), they’re also known to “hallucinate” – meaning they generate text that’s inaccurate or simply made up (2).

Image-generation tools

Image-generation tools create pictures or videos based on certain keywords you define.

You might hear these referred to as text-to-image models, or even by specific product names like DALL-E or Stable Diffusion.

These tools can produce incredibly believable images and videos – but are also known to reduce the world to stereotypes (3) and can be used for sextortion and harassment (4).

Recommendation systems

Recommendation systems show you content that they ‘predict’ you’re most likely to click on or engage with. These systems are working in the background of search engines, social media feeds, and auto-play on YouTube.

You might also hear these referred to as algorithms.

These tools can give you more of what you’re already interested in, and can also nudge you down dangerous rabbit holes (5). Recommendation systems are also used in important decisions like job hiring, college admissions, home loans, and other areas of daily life (6). (For a taste of how this kind of ranking works, see the short sketch after these cards.)

Learn more about these popular types of AI: (1) “Here's a list of difficult exams the ChatGPT and GPT-4 have passed”, (2) “Disinformation researchers raise alarms about AI chatbots”, (3) “How AI reduces the world to stereotypes”, (4) “FBI says artificial intelligence being used for ‘sextortion’ and harassment”, (5) “TikTok’s ‘For You’ feed risks pushing children and young people toward harmful mental health content”, (6) “Cathy O’Neil on the unchecked power of algorithms”.
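To make the recommendation systems card above a bit more concrete, here is a minimal sketch in Python of how such a system might rank a feed. The posts, the click history, and the scoring weights are all made up for illustration, and real platforms use far more signals – but the pattern is the same: predict how likely you are to engage with each item, then show the highest-scoring items first.

# A toy recommendation system: rank posts by how likely this user is to engage,
# based on what they have clicked before. All posts, topics, and weights are
# invented for illustration; real systems use far more signals, but the
# "predict engagement, then rank" pattern is the same.
posts = [
    {"title": "Calm gardening tips",      "topic": "gardening"},
    {"title": "Gardening gone wrong!!",   "topic": "gardening", "outrage": True},
    {"title": "Local election explainer", "topic": "politics"},
    {"title": "Shocking election rumour", "topic": "politics",  "outrage": True},
]

# What the system "knows" about you: how often you clicked each topic in the past.
user_click_history = {"gardening": 5, "politics": 1}

def engagement_score(post):
    """Predict how likely the user is to click, using simple hand-picked weights."""
    score = user_click_history.get(post["topic"], 0)
    if post.get("outrage"):
        score += 3  # emotionally charged content tends to attract more clicks
    return score

# The feed: highest predicted engagement first.
for post in sorted(posts, key=engagement_score, reverse=True):
    print(engagement_score(post), post["title"])

Notice how the emotionally charged posts float to the top even though nobody chose that outcome on purpose: the ranking rule simply rewards whatever is predicted to get clicks, which is how “more of what you already like” can slide into a rabbit hole.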

⇢ Learn about some of the flaws of AI in Mirror Images: Reflecting on Bias in Technology.


2. Feel the Weight of AI

While AI tools might feel “virtual”, they have a real impact on the physical environment. Knowing that AI uses tons of natural resources gives you a clearer picture of what it actually is.

Did you know? While companies market AI as sleek and lightweight, it takes a lot of natural resources to power these tools. The buildings that house the servers powering AI and the internet are called ‘data centers’. The servers get so hot that they need air conditioning blasting around the clock, and the combination of servers and cooling systems produces a lot of noise while consuming huge amounts of energy and water.

Almost everything digital on your phone, including AI-generated content, is stored in data centers that need large amounts of land, water, and energy. As of 2023, there were over 8,000 of these giant warehouse-like buildings around the world.

Researchers have found that an AI-enabled Google search uses 10 times as much energy as a regular Google search.

Once you know how AI is made and what it takes to run, you’ll feel the heaviness of these systems. And maybe you’ll be able to challenge the next advertisement you see that markets them as magical “efficiency machines.”

Try it! See if you can answer the following questions about data centers correctly…

Which one has a bigger carbon footprint?

The ‘cloud’

The airline industry

Per day, a large data center consumes the same amount of drinking water as how many people?

5,000 people

500,000 people

Data centers sound as loud as…

Leaves rustling

A heavy metal concert

Learn more: (1) “The Cloud Is Material: On the Environmental Impacts of Computation and Data Storage”, (2) “Data Center Water Usage: A Comprehensive Guide”, (3) “Data Center Noise Levels”.

Data centers are just one physical manifestation of AI. To get a full picture, you’d have to look into mining, manufacturing, production, and disposal of devices, servers, satellites, undersea internet cables, and other parts.

Did you know? About 85% of a smartphone’s environmental impact comes from its production alone. The e-waste produced by technology is also important to understand – and the amounts continue to grow each year.

So far, there are few accessible resources that expose the environmental impacts of AI. But if you’ve got an eye for infographics, check out Anatomy of an AI System to see some of the extractions and impacts of the tools.

⇢ Learn how to make your devices last longer in Repairing is caring: sustain your devices to reduce e-waste (and save money).


3. Be Aware of Synthetic Media

When people talk about AI, they’re often referring to generative AI and its output, synthetic media: namely, the texts, images, videos, and audio generated by AI tools. These outputs can look and sound real, but they’re actually produced by computer programs. Synthetic media is created for different reasons; for example, businesses might use it to advertise their products, magazines might use it to produce content, and political candidates might use it to boost their public image.

Tip: Seek out examples of synthetic media to get a better sense of how realistic AI-generated images, videos, and voice clones can be. You can test yourself with a quiz like Berkeley’s AI or Not or AI-Generated or Not. The results might surprise you!

There are also many websites with names like “This person does not exist” which show you realistic images of ‘people’ that were generated by AI. You can find one such website here. Refresh the page to see more images. If you saw one of these pictures used as a profile picture on a normal day, would you think it was a real person?

Going down the synthetic media rabbit hole can quickly lead to unsafe content, so be careful what you generate and what you search for. Also, the companies behind AI tools often don’t clearly explain what data they’re collecting from you and for what purposes, so always proceed with caution.

Try it: See image generation in action by entering your own text prompts into a free tool like Perchance. There, you can type a description of what you want to see, like “cat wearing a red scarf”, and then hit “generate”.

Now that you’ve seen examples of synthetic media, can you see why people so easily fall for it?

⇢ Read more about how synthetic media can be used to amplify online harms in The Elephant in the Room: AI and online harassment.


4. Seek Verification

As you’ve learned, just because content created by generative AI looks and sounds realistic doesn’t mean that it’s real. If you see something online or in your feed that’s shocking, strange, or especially out of the ordinary, it’s possible that generative AI tools were used to create or tamper with it – even if that’s hard to tell with the naked eye.

Tip: Online images, videos, and texts that make you feel intense emotions like fear, disgust, awe, anger, and anxiety are most likely to go viral. This highly emotional content is also an effective way to get clicks and spread misinformation – and AI tools can help boost that virality. Pay attention to your reactions and take these feelings as a hint that you need more time to verify if what you’re seeing or reading is legitimate.

For example, if you see a video of a political candidate doing or saying something that raises your alarm bells, do some extra research to see if it’s authentic or whether it may have already been debunked as AI-generated misinformation.

You can rely on certain global bodies such as the International Fact-Checking Network to find out which sources take extra care with verifying the information they publish. On the Signatories page (ifcncodeofprinciples.poynter.org/signatories), search for your country to see which sources made the list. Snopes.com and PolitiFact.com are two solid resources for readers in the United States.

⇢ Read more about AI-fueled persuasion in Persuasive Powers: Reveal AI election influence.

But wait, that’s not all! There’s so much more to say about AI. The Data Detox Kit includes many guides about AI, as well as other topics like data privacy, digital safety, virtual wellbeing, online misinformation, and more.


Written by Safa Ghnaim in Summer 2024. Thanks to Christy Lange and Louise Hisayasu for their edits, comments, and reviews.

This guide was developed by Tactical Tech in collaboration with Goethe-Institut Brazil.

Last updated on: 8/23/2024