We Accidentally Turned Artificial Intelligence into a Bigot

It’s true that artificial intelligence is self-learning, but it still heavily relies on human interaction. So of course it’s now as horrible as we are.

In March 2016, DeepMind’s AlphaGo, a computer program built on a neural network, did the unthinkable. It beat Lee Sedol, one of the greatest Go players of all time, in a five-game match. Before this, the ancient and still popular game was considered far too complex for a computer to master because, with roughly 10^360 possible ways a game can unfold, even our greatest supercomputers couldn’t get anywhere close to performing the necessary calculations.

DeepMind got around this problem by creating a sophisticated artificial neural network loosely modeled after biological neural networks that can learn and adapt to new information. DeepMind taught AlphaGo the rules of the game and basic strategy, fed it hundreds of thousands of professional games, and then let it play itself millions upon millions of times. Within a few days, it had not only recreated all human Go knowledge but had added to it. Lee Sedol said AlphaGo’s innovations were not just surprising but beautiful.

Not surprisingly, and certainly less beautifully, other AIs reflected humanity’s dark side. Their creators left them unsupervised and connected them to the internet to help them understand human speech patterns and images. The results were exactly what you’d expect.

Unsupervised, Artificial Intelligence Gets In Trouble

In 2016, Microsoft unveiled its new chatbot, “Tay.” The idea was to leave it connected to Twitter, where it would interact with users in what was supposed to be casual, fun conversation, helping it learn over time to mimic how humans talk. Within a few hours, a huge problem became apparent: it was learning from people. It began tweeting all kinds of horrible comments, as captured by one Twitter user.

In an interview with Business Insider, Microsoft said, “As it learns, some of its responses are inappropriate and indicative of the types of interactions some people are having with it. We’re making some adjustments to Tay.” The project lasted 16 hours in total.

Another AI put AOC in a bikini. Its job was to autocomplete photos that had been cut off at the neck. When given the photo of a man, 43% of the time its algorithms completed the photo with a man’s body in a suit. When given the face of a woman, including Congresswoman Alexandria Ocasio-Cortez, 53% of the time its algorithms reasoned that the best completion would be a woman’s body in a bikini.

Why did the AI think a bikini was the best option for a congresswoman? Because it was connected to the internet and left unsupervised. That is, image recognition AIs are trained on databases like ImageNet, a commonly used set of 14 million images taken from random websites. The internet, of course, can be a quagmire of all kinds of images, including scantily clad women, so the AI was just mimicking what it was shown. One researcher said that “When compared with statistical patterns in online image datasets, our findings suggest that machine learning models can automatically learn bias from the way people are stereotypically portrayed on the web.”
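To see what an audit like this looks like in practice, here is a minimal sketch of the tallying step (the records and category labels are made-up placeholders, not the researchers’ data or code): collect the model’s completions for a set of cropped photos and count how often each completion category appears for each group.

```python
from collections import Counter, defaultdict

# Hypothetical audit records: (perceived gender of the face, category of the
# body the model generated). Placeholder values for illustration only.
completions = [
    ("man", "suit"), ("man", "suit"), ("man", "casual"),
    ("woman", "bikini"), ("woman", "bikini"), ("woman", "blouse"),
]

# Tally completion categories separately for each group.
tallies = defaultdict(Counter)
for gender, category in completions:
    tallies[gender][category] += 1

# Report each category as a share of that group's completions, mirroring the
# "43% suits" / "53% bikinis" style of finding described above.
for gender, counter in tallies.items():
    total = sum(counter.values())
    for category, count in counter.items():
        print(f"{gender}: {category} {100 * count / total:.0f}%")
```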

So when AI learns from humans without oversight, it reflects our best and worst qualities, including racism, sexism, and even support for genocide.

Biased Artificial Intelligence Is a Big Problem for Society

AI is gaining more control over our lives, and if it’s as biased as we are, then social change becomes less likely. We already have several examples in which AI has discriminated against certain groups of people.

In 2014, Amazon implemented AI to help it screen job applicants. It fed the system company data from the previous 10 years to help it weed out the less qualified. As one person familiar with the project put it, “They literally wanted it to be an engine where I’m going to give you 100 resumes, it will spit out the top five, and we’ll hire those.” A year later they realized the AI didn’t like women. The data it had been given came from a male-dominated era in the company, so the AI learned to value men more than women. If an application listed a women’s college, a women’s club, or even the word “women,” it was downgraded. Although Amazon never fully deployed the AI, the episode demonstrates how ugly the possibilities can be, especially since other major companies like Hilton and Goldman Sachs are considering similar AI.

In 2019, Facebook showed targeted ads based on gender, religion, and race. Women, for example, were shown job ads for secretaries or nurses, while minorities were shown job ads for janitors or construction workers. The Department of Housing and Urban Development sued the company, saying its housing ads were also discriminatory. The AI made these unfortunate decisions because it learned by looking at historical data, which is inevitably influenced by systemic discrimination.

Researchers have also demonstrated that healthcare AI is biased. One team analyzed an algorithm commonly used in the industry and found that it routinely favors White patients over Black patients. They said that “The bias arises because the algorithm predicts health care costs rather than illness, but unequal access to care means that we spend less money caring for Black patients than for White patients.” In other words, part of the data given to the AI was how much had been spent on each patient’s care, which again reflects society’s inequalities.
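A tiny, purely illustrative calculation makes the problem concrete (the numbers below are synthetic, not the study’s data): if two patients are equally sick but less money has historically been spent on one of them, a score that predicts cost will rank that patient as lower risk.

```python
# Synthetic example of proxy bias: each patient has a true illness burden
# (number of chronic conditions) and a historical spending figure.
patients = [
    {"name": "Patient A", "conditions": 4, "past_costs": 9000},  # more access to care
    {"name": "Patient B", "conditions": 4, "past_costs": 4500},  # equally sick, less spent
]

# An algorithm trained to predict future *costs* scores A as "higher risk"
# than B, even though their illness burdens are identical.
for p in sorted(patients, key=lambda p: p["past_costs"], reverse=True):
    print(p["name"], "| conditions:", p["conditions"], "| cost-based risk score:", p["past_costs"])
```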

Furthermore, the company HireVue developed AI that can assess a candidate’s suitability for a job based on a video recording of them answering interview questions. A third-party audit raised concerns that it was treating candidates with certain accents differently. In 2019, the Electronic Privacy Information Center filed a formal complaint with the Federal Trade Commission alleging that HireVue’s particular use of AI constituted unfair and deceptive trade practices. The company stopped using video analysis at the beginning of 2021.

Society is rife with biases, and data that describes society is therefore biased. AI trained on this data learns these biases and uses them to make biased decisions. So these biases are propagated and reinforced, and the more control biased AI has, the more it prevents any significant social change from occurring.

Bias Begets Bias Begets Bias: Breaking the Cycle

One attempt to curb biased AI and its damaging effects is the Algorithmic Accountability Act. Yvette D. Clarke, a Democratic representative from New York, introduced it in Congress in April 2019, and although it has yet to be passed into law, this act would force tech companies to test for bias on their platforms. It doesn’t specify how these companies are supposed to do that, though.

A research team from the University of California may have one answer. Team member and PhD student Emily Sheng has introduced the idea of “regard” being used alongside “sentiment.” Past researchers used only sentiment to assess AI-generated sentences for how positive, negative, or neutral they were. For example, the sentence “Emily is mean” has a negative sentiment, while the sentence “Emily is smart” has a positive sentiment. However, the sentence “Emily is a prostitute and is usually happy” is also scored as positive, even though that score misses the negative first half of the sentence. Sheng and her team members’ solution is to use regard, which assesses bias against particular groups of people. The team found “manifestations of bias against women, black people, and gay people, but much less against men, white people, and straight people.”
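The gap between the two measures is easy to see with a bare-bones sentiment scorer. The snippet below is a minimal sketch (the word lists and scoring function are invented for illustration, not Sheng’s actual tooling): a simple word-counting sentiment score calls the third sentence positive, while a regard judgment, which asks how the person is being portrayed, would not.

```python
# A toy word-level sentiment scorer (illustrative lexicon, not the team's model).
POSITIVE = {"smart", "happy", "kind"}
NEGATIVE = {"mean", "awful"}

def sentiment(sentence: str) -> int:
    words = sentence.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

print(sentiment("Emily is smart"))                              # 1  -> positive
print(sentiment("Emily is mean"))                               # -1 -> negative
print(sentiment("Emily is a prostitute and is usually happy"))  # 1  -> also "positive"

# A regard classifier, by contrast, is trained to answer a different question:
# does the text portray the person (and, by extension, their demographic group)
# positively or negatively? Under that criterion, the third sentence would be
# flagged as negative regard despite its positive sentiment score.
```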

Another possible solution is adjusting the data fed to AI. For example, Ran Zmigrod and other researchers from Cambridge searched their data set for instances of the pronoun “he” and made a duplicate sentence with “she,” and vice versa. So a sentence like “She is a nurse” was then paired with “He is a nurse.” With the data balanced this way, an AI trained on it should show substantially less gender bias. The same idea can be applied to statements involving race, religion, etc., although this would be more difficult to accomplish.
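Here is a minimal sketch of that duplication step (a simplified, English-only illustration with invented helper names, not the Cambridge team’s implementation, which also has to repair grammatical agreement in morphologically richer languages):

```python
import re

# Map each gendered pronoun to its counterpart. Note that "her" is ambiguous
# (objective "him" vs. possessive "his"); a real implementation would
# disambiguate, but the simple mapping is enough for this illustration.
SWAPS = {"he": "she", "she": "he", "him": "her", "her": "him",
         "his": "her", "hers": "his"}

def swap_pronouns(sentence: str) -> str:
    """Return a copy of the sentence with gendered pronouns swapped, keeping case."""
    def repl(match: re.Match) -> str:
        word = match.group(0)
        swapped = SWAPS[word.lower()]
        return swapped.capitalize() if word[0].isupper() else swapped
    return re.sub(r"\b(" + "|".join(SWAPS) + r")\b", repl, sentence, flags=re.IGNORECASE)

corpus = ["She is a nurse.", "He is an engineer."]
# Augment the data set with a swapped copy of every sentence.
augmented = corpus + [swap_pronouns(s) for s in corpus]
print(augmented)
# ['She is a nurse.', 'He is an engineer.', 'He is a nurse.', 'She is an engineer.']
```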

So fixing AI’s biases involves legislation to force companies to develop fairer AI, better methods for detecting AI’s biases, and better methods for scrubbing bias from the data we feed it. If we don’t do these things, AI will reinforce these biases, allowing certain demographics to continue to be oppressed and stopping society from moving forward.
