Mirror Images

Reflecting on Bias in Technology

What kinds of assumptions do you make about people who let their dog sleep in their bed? Or about people who prefer to drink sparkling water? When you imagine that person, who do you see?

Some of the ideas you hold might be based on biases you have without realizing it. Though it may seem like a ‘neutral’ technology, AI has biases, too.

In this Data Detox guide, we’ll look at some examples of why AI may not be the objective tool people think it is. We’ll do a series of self-reflections to strengthen our ability to identify biases, holding up a mirror to society… and ourselves.

Through the looking glass we go!


Demystify AI

Understanding AI doesn’t have to feel like rocket science. Some people talk about AI as if it’s magic, but ‘artificial intelligence’ is just a machine.

Did you know? AI is not just one thing. Simply put, AI tools are computer programs that have been fed a lot of data to help them make predictions. “AI” refers to a variety of tools designed to recognize patterns, solve problems, and make decisions at a much greater speed and scale than humans can.

But like any tool, AI is designed and programmed by humans. The people who create these machines give them rules to follow: “Do this; but don’t do that.” Knowing that AI tools are automated systems with their own human-influenced limitations can give you more confidence to talk about the capabilities and drawbacks of AI.
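Curious what ‘fed a lot of data to help them make predictions’ can look like in practice? Here’s a minimal sketch in Python – a toy, not a real AI product, and the example messages and labels are invented – but the basic shape is the same: data goes in, patterns are counted, predictions come out.

```python
# A toy "AI": learn word counts from labeled examples, then predict
# the label of new text. The messages and labels are invented.

from collections import Counter

# 1. The data the program is "fed": examples labeled by humans.
training_data = [
    ("win a free prize now", "spam"),
    ("free money click here", "spam"),
    ("meeting rescheduled to monday", "not spam"),
    ("lunch tomorrow with the team", "not spam"),
]

# 2. "Training": count how often each word appears under each label.
word_counts = {"spam": Counter(), "not spam": Counter()}
for text, label in training_data:
    word_counts[label].update(text.split())

# 3. "Prediction": score new text against each label's word counts.
def predict(text):
    scores = {label: sum(counts[word] for word in text.split())
              for label, counts in word_counts.items()}
    return max(scores, key=scores.get)

print(predict("claim your free prize"))   # -> spam
print(predict("team meeting on monday"))  # -> not spam
```

Notice that the program’s ‘knowledge’ is nothing more than the examples it was given – a theme we’ll come back to.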

When people talk about AI, they could be talking about many different things. Check out some examples of AI tools that are especially popular:

Text-generation tools

Text-generation tools create content based on certain keywords (or “prompts”) you define. They are trained on large amounts of text from the internet, of varying degrees of quality.

You might hear these referred to as “large language models” (LLMs) or by specific product names like ChatGPT, or even more casual terms like “chatbots” or “AI assistants.”

While these tools have been known to achieve feats of human-like intelligence, like acing exams (1), they’re also known to “hallucinate”, meaning they generate text that is inaccurate (2).
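For the curious, here’s a minimal sketch of the core trick behind text generation: predicting a likely next word from patterns in training text. Real LLMs use neural networks trained on billions of documents – this tiny invented ‘corpus’ just shows the idea.

```python
# A toy text generator (not any real product): pick each next word
# based on which words followed it in the training text.

import random
from collections import defaultdict

# A tiny "training corpus", invented for illustration.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# "Training": record which words follow which (a bigram model).
next_words = defaultdict(list)
for current, following in zip(corpus, corpus[1:]):
    next_words[current].append(following)

# "Generation": repeatedly pick a plausible next word.
def generate(start, length=6):
    word, output = start, [start]
    for _ in range(length):
        word = random.choice(next_words[word])
        output.append(word)
    return " ".join(output)

print(generate("the"))  # e.g. "the cat sat on the rug ."
```

Whatever is common in the training text becomes common in the output – which is where both the fluent answers and the made-up ‘facts’ come from.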

Image-generation tools

Image-generation tools create pictures or videos based on certain keywords you define.

You might hear these referred to as text-to-image models, or by specific product names like DALL-E or Stable Diffusion.

These tools can produce incredibly believable images and videos – but are also known to reduce the world to stereotypes (3) and can be used for sextortion and harassment (4).

Recommendation systems

Recommendation systems show you content that they ‘predict’ you’re most likely to click on or engage with. These systems are working in the background of search engines, social media feeds, and auto-play on YouTube.

You might also hear these referred to as algorithms.

These tools can give you more of what you’re already interested in, and can also nudge you down certain dangerous rabbit holes (5). Recommendation systems are used in important decisions like job hiring, college admissions, home loans, and other areas of daily life (6).
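As a rough illustration (all the titles, tags, and history below are invented), a recommendation system can be as simple as ranking items by how much they overlap with what you’ve already engaged with:

```python
# A toy recommender: rank unwatched items by how many tags they
# share with the user's watch history. Everything here is invented.

catalog = {
    "cute puppy compilation": {"pets", "funny"},
    "dog training basics":    {"pets", "howto"},
    "extreme diet tips":      {"health", "extreme"},
    "healthy meal prep":      {"health", "howto"},
}

watch_history = ["cute puppy compilation"]

# Build an interest "profile" from what the user already watched...
profile = set().union(*(catalog[title] for title in watch_history))

# ...then rank everything else by overlap with that profile.
def recommend():
    candidates = [t for t in catalog if t not in watch_history]
    return sorted(candidates,
                  key=lambda t: len(catalog[t] & profile),
                  reverse=True)

print(recommend())  # "dog training basics" ranks first: most shared tags
```

Because such a system only amplifies past engagement, it keeps serving more of the same – the rabbit-hole effect in miniature.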

Learn more about these popular types of AI: (1) “Here's a list of difficult exams the ChatGPT and GPT-4 have passed”, (2) “Disinformation researchers raise alarms about AI chatbots”, (3) “How AI reduces the world to stereotypes”, (4) “FBI says artificial intelligence being used for ‘sextortion’ and harassment”, (5) “TikTok’s ‘For You’ feed risks pushing children and young people toward harmful mental health content”, (6) “Cathy O’Neil on the unchecked power of algorithms”.

Expose the cracks in AI

Now that we’ve looked at some popular AI tools, let’s zoom in on how chatbots – the tools that generate text – work.

AI is designed by people and trained on data sets. Just like you, the people who build it have certain beliefs, opinions and experiences that inform their choices, whether they realize it or not. The engineers and companies that build and train AI may think certain information or goals are more important than others. Depending on which data sets they ‘feed’ to the AI tools they build – like algorithms or chatbots – those machines might serve up biased results. That’s why AI can produce inaccurate data, generate false assumptions, or make the same bad decisions as a person.

Chatbots have been fed so much data that they can write computer code and ace exams. But they also present things as facts that aren’t always true. And they can generate text that repeats biases that already existed in their training data or among the programmers who trained them.
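To make ‘biased data in, biased results out’ concrete, here’s a deliberately skewed toy example. Notice that nothing in the code is malicious – the bias comes entirely from the invented ‘historical’ data it learns from.

```python
# A toy hiring model trained on skewed past decisions (all invented):
# it faithfully learns – and automates – the old pattern.

past_decisions = [
    {"school": "Oakdale",  "hired": True},
    {"school": "Oakdale",  "hired": True},
    {"school": "Oakdale",  "hired": True},
    {"school": "Riverton", "hired": False},
    {"school": "Riverton", "hired": False},
]

# "Training": learn each school's historical hire rate.
hire_rate = {}
for school in ("Oakdale", "Riverton"):
    group = [d for d in past_decisions if d["school"] == school]
    hire_rate[school] = sum(d["hired"] for d in group) / len(group)

# "Prediction": recommend hiring when the past rate is high.
def recommend_hire(school):
    return hire_rate[school] > 0.5

print(recommend_hire("Oakdale"))   # True  – the old pattern, repeated
print(recommend_hire("Riverton"))  # False – the old bias, automated
```

No engineer wrote ‘reject Riverton’ anywhere – the data did.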

While some experts believe text generation tools are getting ‘smarter’ on their own, others say these tools don’t actually understand the words they repeat. Here are some reasons why you might want to think about the biases behind what chatbots tell you:

  • Some of the data they’re trained on might be personal, copyrighted, or used without permission.
  • Depending on the data sets, they might be full of hate speech, conspiracy theories, or information that’s just plain wrong.
  • The data might be biased against certain people, genders, cultures, religions, jobs, or circumstances.

Did you know? The data AI tools are trained on can also leave things out altogether. If there’s little or no information about a group of people, a language, or a culture in the training data, the tool won’t be able to generate reliable answers about them. A key 2018 study by Joy Buolamwini and Timnit Gebru called “Gender Shades” showed how widely used facial recognition systems struggled to identify the faces of People of Color, especially Black women.
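The “Gender Shades” insight can be illustrated with a toy evaluation (the numbers below are fabricated): an impressive overall accuracy can hide failures on an underrepresented group, which is why researchers measure results per group.

```python
# Fabricated test results: group A dominates the test set, so the
# overall number looks great even though group B is failed badly.

test_results = (
    [("group A", True)]  * 90 +   # group A: 90 of 90 correct
    [("group B", True)]  * 4  +   # group B: only 4 correct...
    [("group B", False)] * 6      # ...out of 10
)

def accuracy(results):
    return sum(correct for _, correct in results) / len(results)

print(f"overall: {accuracy(test_results):.0%}")   # 94% – looks fine
for group in ("group A", "group B"):
    subset = [r for r in test_results if r[0] == group]
    print(f"{group}: {accuracy(subset):.0%}")     # A: 100%, B: 40%
```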

Shine a spotlight on bias

Now that you know about some of the weaknesses that can exist in AI data sets, which are built by people like us, let’s take a look at ourselves. How can the way our human brains work shed light on AI's biases?

Imagine this scenario: you see a news headline about a topic you care about and before even clicking on it, you can imagine what it will say. It’s not because we’re all fortune tellers – it’s because we hold preconceived notions about certain topics. Perhaps you sometimes perpetuate biases by over-generalizing or jumping to conclusions:

  • Over-generalizing means assuming that what’s true of the few cases you’ve seen must be true of a whole group.
  • Jumping to conclusions means making a judgment with little or no evidence.

If you’ve done either of these, you’re not alone! These thought patterns are not uncommon. The key is to become aware of them and take concrete steps to ensure they stay in check.

Try it! Think back to an earlier example: people who let their dog sleep in their bed. Imagine who those people are, generally speaking. What do people who do that have in common?

  • Where do people who do that tend to live?
  • How much money do people who do that tend to make?
  • How clean or messy do they tend to keep their houses?

If you could imagine certain characteristics or behaviors of people who let their dog sleep in their bed, you may have just over-generalized or jumped to conclusions about them.

If you did not make generalizations, and you recognized that one habit does not necessarily define other aspects of a person’s life – that’s great! But hold on: before you think you’re above bias, stay with us until the end, because there is much more to explore.

There are types of biases that are more deeply ingrained in individuals, organizations, cultures, and societies. Shine a light on them by reflecting on these questions:

  • How do you expect others to present themselves, including how they behave, dress, and speak?
  • Are there any groups that face more risk, punishment, or stigmatization due to what they look like or how they behave, dress, or speak?

The biases you just reflected on often rely on assumptions, attitudes, and stereotypes that have been part of cultures for a very long time and can influence your decision-making in unconscious ways. This is why they’re called implicit biases – they’re often hard-wired into your mindset, difficult to spot, and uncomfortable to confront.

Common implicit biases include:

  • Gender bias: the tendency to jump to conclusions regarding people from different genders based on prejudices or stereotypes.
  • Racial and/or ethnic bias: the tendency to jump to conclusions regarding people based on the color of their skin, cultural background, and/or ethnicities.

Try it! Harvard’s Project Implicit has a huge library of implicit bias tests you can take for free online to see how you do and which areas you can work on.

With a lot of implicit biases, it can feel like a journey to even identify those beliefs. It’s unlikely to happen overnight, but why not start now?

Everything is m(ai)gnified

Now that you’ve seen common examples of these thought patterns and implicit biases, imagine what they might look like on a much larger scale. Thought patterns and implicit biases such as these can affect not only individuals but whole groups of people, especially when they get ‘hard-coded’ into computer systems.

Try it! Using the free text-to-image generator at Perchance.org, we entered the prompt “beautiful woman” and got the following results:

[Image: six AI-generated images of white women with blue eyes and wavy brown hair, wearing low-cut spaghetti-strap tank tops. Generated on Perchance.org, 13 August 2024.]

If the tool created six images of “beautiful women”, why do they all look almost identical?

Try it yourself – do your results differ?
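If you’d rather experiment on your own machine, here’s a sketch using the openly released Stable Diffusion model through Hugging Face’s diffusers library. This assumes a GPU and the diffusers, transformers, and torch packages installed; the model name and defaults may have changed since this guide was written.

```python
# A sketch: generate several images from the same vague prompt and
# compare them. Assumes the runwayml/stable-diffusion-v1-5 weights
# are still available on the Hugging Face Hub.

import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

images = pipe("beautiful woman", num_images_per_prompt=6).images
for i, img in enumerate(images):
    img.save(f"beautiful_woman_{i}.png")

# Do the six results look diverse, or do they all resemble one
# narrow idea of beauty?
```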

Bigger studies have been conducted on this topic, with similar results. You can read about one such study and see infographics here: “Humans are biased. Generative AI is even worse”.

AI tools are not neutral or unbiased. They are owned and built by people with their own motivations. Even AI tools that include “open” in their name may not necessarily be transparent about how they operate and may have been programmed with built-in biases.

Tip: Interrogate AI. Ask critical questions about how AI models are built and trained to get a sense of how AI is part of a larger system. You can ask:

  • Who owns the companies that create AI models?
  • How do the companies profit?
  • What are the systems of power created or maintained by the companies?
  • Who benefits from the AI tools the most?
  • Who is most at risk of harm from these AI systems?

The answers to these questions might be difficult or impossible to find. That in and of itself is meaningful.

Since technology is built by people and informed by data (which is also collected and labeled by people), we can think of technology as a mirror of issues that already exist in society. And AI-powered tools don’t just reflect those issues: they reinforce power imbalances and systematize and perpetuate biases more rapidly than ever before.

As you’ve learned, flawed thought patterns are totally normal, and everyone has them in one way or another. Starting to face the facts today can help you avoid mistakes tomorrow – and help you identify flaws within systems like AI.


Written by Safa Ghnaim in Summer 2024. Thanks to Christy Lange and Louise Hisayasu for their edits, comments, and reviews.

This guide was developed by Tactical Tech in collaboration with Goethe-Institut Brazil.

Last updated on: 8/23/2024