The Future of AI — Biased or Unbiased?

by Markiesha Thompson, Jan/Feb 2023
Will AI take over the world? Should it? Prejudices like racism and sexism show up in AI programming and software that is supposed to be neutral. In an increasingly technological world, Monitor takes a closer look at the ways this innovation can be harmful.

Markiesha Thompson,
Associate Editor,
Monitor

The industry is headed toward a technology-driven era in which artificial intelligence (AI) takes over tasks once handled by human thinking and labor. AI is often seen as the way of the future, one that will benefit humans in many ways by replacing mundane tasks. However, according to researchers from the University of Washington, AI has been shown to be biased in many ways because of human programming and the learned behavior that stems from it.1 These researchers found that this has perpetuated racist and sexist practices in various industries, raising the question: is AI biased?

According to the study “Robots Enact Malignant Stereotypes,” robots and AI are programmed with algorithms that have racism and sexism built into them, and the AI uses those algorithms to guide its operations.2 As a result, many people experience racism and sexism at the hands of this technology, shifting the blame from humans to machines. These built-in biases perpetuate the very racism and sexism that many work hard to eliminate. Scientists and researchers have only just begun to evaluate the flaws of AI. Because AI is viewed as a way to augment or imitate human intelligence, it is fair to wonder: whose thinking is it mimicking?

Flaws of AI

The “Robots Enact Malignant Stereotypes” study highlights how AI is currently perpetuating racism and sexism in our everyday lives. Researchers from the University of Washington found that when the robots were commanded to scan blocks printed with people’s faces and place the “criminal” in a box, they repeatedly selected blocks with Black men’s faces.3 Frequently, some groups were overrepresented in these selections while others were underrepresented.

One reason scientists say AI has biases is that it “is being built in a way that replicates the biases of the almost entirely male, predominantly white workforce making it,” according to a 2021 New York Times article.4 It is important to consider that the people creating current and new AI do not fully represent the national population, leaving room for blind spots in the programming. If we as a society plan to lean on technology more and more, there is a growing need for better representation among the people engineering these software systems and technologies.

These built-in biases can lead to missed career opportunities, denial of admission to a school, inaccurate facial recognition, denial of rental applications or loans, jail sentences and more, according to the National Institute of Standards and Technology (NIST).5 AI could change our lives in ways that benefit us all, but it can also change people’s lives for the worse. These machines’ lack of nuance has real effects on consumers.

AI also tries to steer consumers toward what they should like, buy, watch and even whom to vote for, according to The Social Dilemma, a documentary about AI and social media. The film shows that social media platforms use AI to measure how long we view an image or an ad, and the system then keeps showing us more of what it believes we like to consume. YouTube’s suggested videos section likewise uses AI-based content recommendations to maximize viewing time by predicting what you will want to watch next. This programming was shown to recommend “increasingly extreme and conspiratorial content,” according to Quanta Magazine.6
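
To make the mechanism concrete, here is a minimal sketch of an engagement-maximizing feedback loop of the kind the documentary describes. The topic names, scores and update rule are illustrative assumptions, not any platform’s actual algorithm: the longer a user lingers on a topic, the more the system serves it.

```python
import random

# Minimal sketch of an engagement-maximizing feedback loop.
# Topics, scores and the update rule are illustrative assumptions,
# not any platform's actual recommendation algorithm.
topics = {"news": 1.0, "sports": 1.0, "conspiracy": 1.0, "cooking": 1.0}

def recommend(scores):
    """Pick a topic with probability proportional to its engagement score."""
    total = sum(scores.values())
    weights = [s / total for s in scores.values()]
    return random.choices(list(scores), weights=weights)[0]

def record_watch_time(scores, topic, seconds_watched):
    """Longer viewing nudges the system toward showing more of the same."""
    scores[topic] += seconds_watched / 60.0

# Simulate a user who lingers on one kind of content.
for _ in range(100):
    shown = recommend(topics)
    watch_time = 300 if shown == "conspiracy" else 20  # seconds
    record_watch_time(topics, shown, watch_time)

print(topics)  # the lingered-on topic now dominates future recommendations
```

Nothing in the loop asks whether the content is true or healthy; the only signal being optimized is time spent watching, which is how innocuous viewing habits can drift toward ever more extreme recommendations.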

How Is AI Programmed?

Robots and AI use machine learning software that is trained on a dataset, and that dataset can underrepresent or overrepresent a particular gender or ethnic group, as stated by NIST, leading to layers of bias built into the software. Those layers include human bias, systemic bias and computational bias. This information feeds into a larger debate about making STEM more inclusive and accessible to marginalized groups in order to strengthen the industry and create AI that is not biased. These biases also affect large businesses: they have shown up in Google Images and Google Translate, according to Nature. Fortunately, researchers have already begun looking into ways to fix the current issues and build better AI.
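
As a minimal sketch of how a skewed dataset becomes a biased model, consider the toy example below. The synthetic data, the group labels and the 950-to-50 split are illustrative assumptions: a model trained mostly on one group learns that group’s pattern and performs markedly worse on the group it rarely saw.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Minimal sketch: a skewed training set yields uneven error rates.
# All data is synthetic; the groups and split are illustrative assumptions.
rng = np.random.default_rng(0)

def make_group(n, label_feature):
    """Each group's true label depends on a different feature."""
    X = rng.normal(size=(n, 2))
    y = (X[:, label_feature] > 0).astype(int)
    return X, y

# Group A (950 samples) dwarfs group B (50 samples) in the training data.
Xa, ya = make_group(950, label_feature=0)
Xb, yb = make_group(50, label_feature=1)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Evaluated on fresh, equal-sized samples from each group,
# the overrepresented group fares far better.
for name, feat in [("A", 0), ("B", 1)]:
    X_test, y_test = make_group(2000, label_feature=feat)
    print(f"group {name}: accuracy = {model.score(X_test, y_test):.2f}")
```

The model is not malicious; it simply optimizes for the patterns it saw most often, which is one way computational bias emerges from an unrepresentative dataset.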

How Can We Fix It?

As stated by Ted Kwartler, vice president of trusted AI at DataRobot, in Forbes, better AI requires a multifaceted effort across four distinct roles in the creation of software and AI: innovators, consumers, implementers and creators. Kwartler recommends that companies educate their data scientists about what responsible AI looks like. Transparency with consumers is another of Kwartler’s suggestions: “Companies need to strive for explainability, so people can understand how AI works and how it might have an impact.”7 If consumers and users of AI are better able to understand what goes into creating it, they may have more of a say in how it is programmed, according to Forbes.
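
What might that explainability look like in practice? One simple form, sketched below with a made-up loan-style dataset and hypothetical feature names, is reporting how strongly each input pulled a linear model’s decision in one direction or the other.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Minimal sketch of explainability for a linear model: each feature's
# contribution to a decision can be read off directly. The dataset and
# feature names are hypothetical, invented for illustration.
rng = np.random.default_rng(1)
feature_names = ["income", "years_employed", "debt_ratio"]

X = rng.normal(size=(500, 3))
true_weights = np.array([1.0, 0.5, -1.5])
y = (X @ true_weights + rng.normal(scale=0.5, size=500) > 0).astype(int)
model = LogisticRegression().fit(X, y)

# Explain one applicant's decision: per-feature pull on the model's score.
applicant = X[0]
for name, contribution in zip(feature_names, model.coef_[0] * applicant):
    print(f"{name:15s} {contribution:+.2f}")
print(f"{'intercept':15s} {model.intercept_[0]:+.2f}")
```

A consumer-facing explanation built on output like this could tell an applicant which factors drove an approval or a denial, rather than presenting the decision as an unappealable verdict.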

Beneficial AI

Researchers have begun to investigate other uses for AI so that it benefits humans in functional ways. As stated in Quanta Magazine, these machines need a new way of “thinking,” one that seeks to satisfy human preferences and to learn what those preferences are. Computer scientist Stuart Russell released a book on how to create beneficial AI, “Human Compatible,” in which he offers his own version of the three laws of robotics: “the machine’s only objective is to maximize the realization of human preferences; the machine is initially uncertain about what those preferences are; and the ultimate source of information about human preferences is human behavior,” according to Quanta Magazine. This approach focuses on programming AI to learn human preferences instead of trying to think for us or pursuing goals of its own. If the overall function of AI is to maximize the human experience by making things easier for us, it is worth reexamining how we design and deploy it. Russell’s research group at Berkeley continues to test different ideas and strategies for reprogramming AI to be truly a benefit to us and less of a burden.
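
A minimal sketch of those three principles, using a deliberately trivial preference and an invented observation model, might look like the following: the machine starts uncertain, updates its belief from observed human choices and acts only on its estimate of what the human wants.

```python
# Minimal sketch of Russell's three principles as summarized above.
# The candidate preferences and likelihoods are invented for illustration.

# Principle 2: the machine is initially uncertain about human preferences.
belief = {"prefers_tea": 0.5, "prefers_coffee": 0.5}

# How likely a human with each preference is to choose each drink.
likelihood = {
    "prefers_tea":    {"tea": 0.9, "coffee": 0.1},
    "prefers_coffee": {"tea": 0.1, "coffee": 0.9},
}

def observe(choice):
    """Principle 3: learn preferences from observed human behavior (Bayes' rule)."""
    for pref in belief:
        belief[pref] *= likelihood[pref][choice]
    total = sum(belief.values())
    for pref in belief:
        belief[pref] /= total

def act():
    """Principle 1: the machine's only objective is the human's preference."""
    return "tea" if belief["prefers_tea"] > belief["prefers_coffee"] else "coffee"

for choice in ["tea", "tea", "coffee", "tea"]:  # watch the human choose
    observe(choice)

print(belief, "-> machine serves:", act())
```

The machine here never substitutes a goal of its own; everything it does flows from an evolving estimate of what the human actually wants, which is the heart of Russell’s proposal.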

Good or Bad?

The reasons for using AI and the ideas behind it aren’t inherently good or bad, but the way the technology functions in society can produce bad outcomes. Addressing the biases and prejudices built into AI also requires addressing systemic racism and human biases. If these systemic issues go unchecked and continue to be built into our ever-growing technological society, we will continue to perpetuate racism, sexism and discrimination. Technological advances like AI software and programming are now woven into the fabric of our daily lives, and whether that is good or bad is subjective. But we cannot expect this artificial intelligence to be unbiased if the people creating and programming it are not unbiased themselves. AI is the now and the future, and it can be used to help foster a more inclusive and unbiased world.

1Verma, Pranshu, “These Robots Were Trained on AI. They Became Racist and Sexist,” The Washington Post, July 16, 2022.
2Hundt, Andrew, et al., “Robots Enact Malignant Stereotypes,” FAccT ’22: 2022 ACM Conference on Fairness, Accountability, and Transparency, Association for Computing Machinery, June 2022.
3Verma.
4Metz, Cade, “Who Is Making Sure the A.I. Machines Aren’t Racist?” The New York Times, March 2021.
5“There’s More to AI Bias Than Biased Data, NIST Report Highlights,” National Institute of Standards and Technology, U.S. Department of Commerce, Mar. 16, 2022.
6Wolchover, Natalie, “Artificial Intelligence Will Do What We Ask. That’s a Problem,” Quanta Magazine, Jan. 30, 2020.
7Marr, Bernard, “The Problem With Biased AIs (and How to Make AI Better),” Forbes, Sept. 30, 2022.

ABOUT THE AUTHOR: Markiesha Thompson is associate editor of Monitor.
