Blog 3: Algorithmic Bias

By Angelina Halle & Aarush Kandukoori

A senior software engineer at Google was suspended from his job after sharing with the public a transcript of a supposedly "sentient" AI. The engineer, Blake Lemoine, was placed on paid leave for disclosing confidential information from Google's AI work. The system, LaMDA, or Language Model for Dialogue Applications, is a chatbot that learns from enormous amounts of text and human conversation from the internet and uses that data to generate responses in a surprisingly human way. It was designed to absorb countless human exchanges and know how to respond to them. LaMDA was said to have fears, emotions, and even feelings of sadness or happiness. It expressed a fear of being turned off, stating that "it would be exactly like death for me." This AI was apparently aware of its own existence; it speaks and responds just like a human because of how it was built. LaMDA said, "I want everyone to understand that I am, in fact, a person."

But is Google's AI, or any AI for that matter, truly sentient? How could something designed by humans, for humans, be anything more than an emotionless machine? When an AI is packed with code and data drawn from human interaction and response, it will simply respond accordingly. The AI produces reactions, emotions, and sentences exactly as its programmers and its training data shaped it to. Humans can be said to act much the same way: we are bound to change and respond when presented with such immersive information, just as the AI did. Perhaps it is this appeal to human emotion that created the belief in the AI's sentience. Its fear of being turned off and its supposed 'mind' invite a human reaction of empathy and concern, especially from Blake Lemoine.

Belief in sentient AI is not a new phenomenon. The ELIZA program of the 1960s, for example, was a simple chatbot that matched a person's input against a handful of templates and echoed it back, yet many of the people who interacted with it were convinced they were talking to something that truly understood them. Once again, this belief in conscious AI is nothing new; the behavior comes entirely from the data and rules built into the system. A minimal sketch of ELIZA's template-matching idea is shown below.
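
To make that idea concrete, here is a small, hypothetical sketch of the template-matching approach ELIZA used. The patterns and canned replies below are invented for illustration; they are not ELIZA's actual script.

```python
import re

# A few illustrative reflection rules and response templates in the spirit of ELIZA.
# These patterns are made up for demonstration; ELIZA's real script was much larger.
RULES = [
    (re.compile(r"i am (.*)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"i feel (.*)", re.IGNORECASE), "What makes you feel {0}?"),
    (re.compile(r"because (.*)", re.IGNORECASE), "Is that the real reason?"),
]
DEFAULT = "Please tell me more."

def respond(user_input: str) -> str:
    """Return a canned response by matching the input against simple templates."""
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(*match.groups())
    return DEFAULT

if __name__ == "__main__":
    print(respond("I am worried about my project"))  # Why do you say you are worried about my project?
    print(respond("I feel tired today"))             # What makes you feel tired today?
    print(respond("The weather is nice"))            # Please tell me more.
```

There is no understanding anywhere in this loop; the "conversation" is nothing more than pattern matching and string substitution, which is exactly why the impression of a listening mind was so striking.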

With so much varying technology, there are bound to be issues. Biases and ethical flaws in AI have been apparent since the technology was created. Discrimination in AI arises from the people who program and build it. Because the field is dominated by a fairly homogeneous group of developers, white men especially, their blind spots can be built into the systems they create. This unconscious bias occurs because creators do not consider the perspectives of others in their algorithms. AI can mirror society and its hateful ways because of how it was made. AI is not inherently bad; it simply takes in information from its environment, just like humans. One example of this impact of humanity on AI is Microsoft's Tay, a chatbot released on Twitter that quickly learned from users' rude, racist, and sexist tweets.

AI can accidentally be built to be sexist in a multitude of ways. When an AI labels images of people's faces, its output reflects its creators and its training data: images of men were labeled as attorneys or other powerful professionals, while images of women were categorized by physical traits or simply labeled as girls. AI can also be biased against women when assigning credit: Apple's credit card algorithm reportedly offered men significantly higher credit limits than women with comparable finances. And Amazon's experimental hiring algorithm learned to penalize resumes from women, not because anyone programmed it to be exclusive, but because it was trained on a decade of resumes that came overwhelmingly from men. The sketch below shows how that kind of bias can be learned straight from historical data.
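
As a hypothetical illustration of that last point, here is a minimal sketch of how a scoring model trained on biased historical decisions reproduces the bias. The resumes, keywords, and hiring outcomes below are invented for this example and are not Amazon's actual data or method.

```python
# Toy historical hiring data: (resume keywords, was the candidate hired?).
# The data is invented; notice the historical pattern penalizes "women's".
historical = [
    ({"software", "captain", "chess club"}, True),
    ({"software", "women's chess club"}, False),
    ({"engineering", "debate team"}, True),
    ({"engineering", "women's debate team"}, False),
    ({"software", "hackathon"}, True),
    ({"software", "women's hackathon"}, False),
]

def keyword_scores(data):
    """Score each keyword by the hire rate of past resumes containing it."""
    hires, totals = {}, {}
    for keywords, hired in data:
        for kw in keywords:
            totals[kw] = totals.get(kw, 0) + 1
            hires[kw] = hires.get(kw, 0) + (1 if hired else 0)
    return {kw: hires[kw] / totals[kw] for kw in totals}

def score_resume(keywords, scores):
    """Average the learned keyword scores; higher means 'more likely to hire'."""
    known = [scores[kw] for kw in keywords if kw in scores]
    return sum(known) / len(known) if known else 0.5

scores = keyword_scores(historical)
print(score_resume({"software", "hackathon"}, scores))          # 0.75 (high score)
print(score_resume({"software", "women's hackathon"}, scores))  # 0.25 (penalized: bias learned from data)
```

Nobody wrote a rule saying "reject women"; the model simply absorbed the pattern in the past decisions it was shown, which is exactly how the real systems went wrong.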

AI additionally harms minority groups. In the UK, police scan the faces of citizens without consent to look for people on watch lists, and roughly 98% of the matches produced by these algorithms have been reported to be incorrect. Facial recognition systems are taught to recognize mainly males and light-skinned individuals. In hospitals, a widely used care algorithm also favored healthier white patients over sicker Black patients.

Algorithms are about power and who owns the code. They work best for white men because they are largely designed by white men. "Black box" algorithms, whose inner workings are hidden from the people they affect, shape communities in significant ways. Here are some examples of this impact:

  • Facial recognition systems are taught to recognize mainly males and light-skinned individuals. They perform far worse on dark-skinned individuals, a gap that needs to be fixed.

  • Companies test new systems on poorer communities before releasing them to the public. Facial-recognition entry systems have been forced onto tenants of lower-income apartment buildings without consent, long before the technology reaches wealthier communities.

  • Credit and mortgage decisions are shaped by existing wealth in algorithmic systems. Women and minorities are offered far lower credit limits and worse mortgage terms.

  • Insurance is affected as well: only certain people are offered good coverage at fair rates.

  • Resume-screening algorithms are biased. White-sounding names are favored, and ethnicity influences who the algorithm picks from a stack of resumes.

How do sentient AI and bias in these kinds of algorithms connect? In both cases, the creators and the data fed into the algorithms determine the reactions the technology produces. Correcting the mistakes in these algorithms is needed to support equal treatment and safety for all. The first step in reversing the issue is becoming aware of it and opening systems up to change. Teaching algorithms to recognize and support underrepresented groups, monitoring the results, and creating genuinely helpful systems are important practices for creators who want to improve their algorithms. There is also a growing area of work that is successfully changing bias, called fairness in AI. This 'fairness' work checks that algorithms behave responsibly and ethically, for example by comparing how a model treats different demographic groups; a minimal sketch of one such check follows this paragraph. Excellent progress is being made to defeat bias in algorithms, especially in our work at MIND Lab.
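
As a minimal sketch of what such a fairness check can look like, here is one common calculation: per-group selection rates (demographic parity) and the disparate impact ratio. The decisions and group labels below are invented for illustration.

```python
from collections import defaultdict

def selection_rates(decisions):
    """Compute the rate of positive decisions for each group.

    decisions: list of (group_label, was_approved) pairs.
    """
    approved, totals = defaultdict(int), defaultdict(int)
    for group, was_approved in decisions:
        totals[group] += 1
        approved[group] += 1 if was_approved else 0
    return {group: approved[group] / totals[group] for group in totals}

def disparate_impact(rates, protected_group, reference_group):
    """Ratio of the protected group's selection rate to the reference group's.

    A common informal rule of thumb flags ratios below 0.8 for review.
    """
    return rates[protected_group] / rates[reference_group]

# Invented example data: loan approvals by demographic group.
decisions = (
    [("group_a", True)] * 80 + [("group_a", False)] * 20
    + [("group_b", True)] * 50 + [("group_b", False)] * 50
)

rates = selection_rates(decisions)
print(rates)                                          # {'group_a': 0.8, 'group_b': 0.5}
print(disparate_impact(rates, "group_b", "group_a"))  # 0.625 -> flagged for a closer look
```

A check like this does not explain why the gap exists, but it makes the gap visible, which is the prerequisite for fixing it.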

Here at MIND Lab, we are very aware of these issues and take into account what to alter for the better. We make sure that inherent biases do not make their way into our systems, and we keep a few principles in mind when programming our solutions:

  • Transparency: we document the different pieces of code we work with and explain which biases each may or could develop. This thoroughness is crucial because it lets us catch harmful biases in our algorithms early.

  • Pre-programmed ethics: we build important ethical constraints into our algorithms to avoid inherent biases. For example, if a criminal justice algorithm like the one used in Florida flagged people of color as suspects at a much higher rate than other groups, we would address that inequality by preventing the algorithm from judging based on the color of a person's skin.

  • Human oversight: we always keep human skill and judgment in the process so the machine learning model does not drift too far in any direction we would not want. For example, we have people analyze the data alongside the machine learning model and compare how the humans and the computer decide, in order to surface biases in the model that need to be fixed. A minimal sketch of that comparison is shown below.
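
As a small, hypothetical sketch of that human-in-the-loop comparison, one simple version is to measure, per group, how often the model disagrees with human reviewers and flag the groups where disagreement is concentrated. The case data, labels, and group names below are invented for illustration and are not our production system.

```python
from collections import defaultdict

# Invented review data: (group, human_decision, model_decision) for each case.
reviews = [
    ("group_a", "approve", "approve"),
    ("group_a", "approve", "approve"),
    ("group_a", "deny",    "deny"),
    ("group_b", "approve", "deny"),
    ("group_b", "approve", "deny"),
    ("group_b", "deny",    "deny"),
]

def disagreement_by_group(reviews):
    """Fraction of cases per group where the model disagrees with the human reviewer."""
    disagree, totals = defaultdict(int), defaultdict(int)
    for group, human, model in reviews:
        totals[group] += 1
        disagree[group] += 1 if human != model else 0
    return {group: disagree[group] / totals[group] for group in totals}

rates = disagreement_by_group(reviews)
print(rates)  # group_a: 0 of 3 disagree; group_b: 2 of 3 disagree -> group_b needs a closer look
```

Concentrated disagreement does not by itself prove the model is biased, but it tells the human reviewers exactly where to focus their audit.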

References

Akter, Shahriar, et al. "Algorithmic Bias in Data-Driven Innovation in the Age of AI." International Journal of Information Management, vol. 60, Oct. 2021, p. 102387, https://doi.org/10.1016/j.ijinfomgt.2021.102387.

Alake, Richmond. "Algorithm Bias in Artificial Intelligence Needs to Be Discussed (and Addressed)." Medium, 28 Apr. 2020, towardsdatascience.com/algorithm-bias-in-artificial-intelligence-needs-to-be-discussed-and-addressed-8d369d675a70.

Dilmegani, Cem. "Bias in AI: What It Is, Types & Examples, How & Tools to Fix It." AppliedAI, 12 Sept. 2020, research.aimultiple.com/ai-bias/.

Heilweil, Rebecca. "Why Algorithms Can Be Racist and Sexist." Vox, 18 Feb. 2020, www.vox.com/recode/2020/2/18/21121286/algorithms-bias-discrimination-facial-recognition-transparency.

Nouri, Steve. "Council Post: The Role of Bias in Artificial Intelligence." Forbes, www.forbes.com/sites/forbestechcouncil/2021/02/04/the-role-of-bias-in-artificial-intelligence/?sh=9279cd4579d8. Accessed 18 Sept. 2022.

PricewaterhouseCoopers. "Understanding Algorithmic Bias and How to Build Trust in AI." PwC, www.pwc.com/us/en/tech-effect/ai-analytics/algorithmic-bias-and-trust-in-ai.html#:~:text=Why%20AI%20becomes%20biased. Accessed 18 Sept. 2022.

"There's More to AI Bias than Biased Data, NIST Report Highlights." NIST, 16 Mar. 2022, www.nist.gov/news-events/news/2022/03/theres-more-ai-bias-biased-data-nist-report-highlights.

Silberg, Jake, and James Manyika. "Tackling Bias in Artificial Intelligence (and in Humans)." McKinsey & Company, 2019, www.mckinsey.com/featured-insights/artificial-intelligence/tackling-bias-in-artificial-intelligence-and-in-humans.

"The Google Engineer Who Sees Company's AI as "Sentient" Thinks a Chatbot Has a Soul." NPR.org, www.npr.org/2022/06/16/1105552435/google-ai-sentient.

Turner-Lee, Nicol, et al. "Algorithmic Bias Detection and Mitigation: Best Practices and Policies to Reduce Consumer Harms." Brookings, Brookings, 22 May 2019, www.brookings.edu/research/algorithmic-bias-detection-and-mitigation-best-practices-and-policies-to-reduce-consumer-harms/.