
Artificial Intelligence: Ethical Issues & Algorithmic Bias

Updated: Oct 10, 2022

If you’ve seen the 2020 film Coded Bias (it’s on Netflix), featuring Joy Buolamwini, you know a bit about how deeply artificial intelligence has become immersed in our daily lives. Buolamwini, a researcher at the MIT Media Lab, realized that AI face recognition was significantly more accurate at detecting white, male faces than any other category. Black, female faces had the lowest detection accuracy scores.


This same technology is sometimes used by police to search for and locate criminal suspects. And it isn’t the only case where AI has led to biased and morally questionable consequences, whether through oversights in the code itself, in its application, or in the inferences drawn from its application. This post is dedicated to exploring these issues.


You might have heard the term GIGO, or “garbage in, garbage out”, meaning that if you feed a computer bad data, it will return no better results. This is a major concern with AI and machine learning because a model learns from whatever data sets it is given, so the decisions it makes may end up reproducing the inequalities baked into that data.


For instance, in facial recognition systems developed by tech giants such as Microsoft, white male faces made up most of the training data, naturally leading the software to be more accurate at detecting those types of faces than others. It goes to show how the representation disparity between different demographics has made its way into the technology we all use.
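
To make that point concrete, here is a minimal sketch in Python using purely synthetic data; the groups, features, and numbers are all hypothetical and stand in for a face dataset, not for any real system from Microsoft or anyone else. A classifier trained on data dominated by one group comes out noticeably less accurate on the underrepresented group.

```python
# A minimal sketch of "garbage in, garbage out" with synthetic data: the groups,
# features, and numbers are hypothetical, not any real face-recognition system.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Two-class toy data; `shift` moves one group's features so a single
    # decision boundary cannot serve both groups equally well.
    y = rng.integers(0, 2, n)
    X = rng.normal(loc=y[:, None] * 2.0 + shift, scale=1.0, size=(n, 2))
    return X, y

# Group A dominates the training data; group B is badly underrepresented.
Xa, ya = make_group(2000, shift=0.0)
Xb, yb = make_group(100, shift=1.5)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Evaluate on balanced held-out data for each group: accuracy on the
# underrepresented group comes out noticeably lower.
Xa_test, ya_test = make_group(1000, shift=0.0)
Xb_test, yb_test = make_group(1000, shift=1.5)
print("accuracy on group A:", accuracy_score(ya_test, model.predict(Xa_test)))
print("accuracy on group B:", accuracy_score(yb_test, model.predict(Xb_test)))
```

The model isn’t told to treat group B differently; it simply has too few group B examples to learn that group’s patterns, which is exactly the “garbage in, garbage out” problem.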


Why is algorithmic bias bad?


Unsurprisingly, large corporations aren't the only ones using problematic AI. Law enforcement and the justice system have been found to use AI to estimate how likely someone convicted of a crime is to commit another after release (known as recidivism). They then use these predictions to determine that person's prison sentence.

Image by Matthew Ansley

Arguments in favor of this system often state that using AI and machine learning to predict recidivism can be more accurate than human judgement. However, while AI does use information from the past to make its decisions, it is people who provide and input that data. The risk assessment AI magnifies the implicit biases in the data people feed it.


According to a report on the discovery of this practice, the AI has been highly inaccurate: only 20% of the people it predicted would commit another crime actually did so. Further, the survey questions the risk assessment AI uses (such as "Was one of your parents ever sent to jail or prison?" or "How many of your friends are taking drugs illegally?") aren't directly related to the individual's crime.
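
To see what that 20% figure means, here is a tiny back-of-the-envelope calculation; the head counts below are hypothetical, and only the 20% rate comes from the report.

```python
# Hypothetical head counts illustrating the reported rate: if only 20% of the
# people flagged as likely to reoffend actually did, the flag's precision is 0.20.
flagged_as_high_risk = 1000        # hypothetical number of people the model flagged
actually_reoffended = 200          # 20% of them, per the reported rate

precision = actually_reoffended / flagged_as_high_risk
wrongly_flagged = flagged_as_high_risk - actually_reoffended

print(f"precision of the 'high risk' prediction: {precision:.0%}")   # 20%
print(f"people flagged who did not reoffend: {wrongly_flagged}")     # 800 of 1000
```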


Yes, such questions could be useful in providing the algorithm with data on past trends, but it may not be ethical to use them to decide someone's prison sentence. The AI may also fail to take important context into consideration, such as someone who comes from a background of crime but is trying to reinvent themselves.


Another example of algorithmic bias is in important life decisions such as college, job, and loan applications.


Since 2014, Amazon had been using AI to scan through job applications and choose what it thought were the most talented applicants. The system worked by finding patterns and trends in past resumes whose candidates had either been hired or rejected. Because most of those workers were men at the time, the AI learned that it should select more resumes from male applicants, because that was the status quo.


The discrimination went as far as devaluing resumes that had any indication they came from a female candidate, for example that the candidate graduated from an all-women's college or participated in women's sports.

Photo by Steve Johnson

The nerve-racking college admissions process has also not steered clear of AI. Some colleges have admitted to using AI to aid in the decision-making process, checking students' GPAs, test scores, and financial situation to predict how likely a student is to enroll if accepted. Because colleges want to increase their yield to make themselves look desirable, they may unfairly reject students whom the AI believes won't attend.


What's the root of the problem?


With AI and machine learning, it's easy to say that the program's code is the culprit, but that oversimplifies the issue: the code is written to draw conclusions solely from the data it is fed. No biases are explicitly programmed into the AI; problems arise when the data is bad.


AI can, however, form incorrect inferences through proxies: when two variables correlate, the model can end up substituting one for the other. For example, the incomes of African Americans are, on average, lower than those of White Americans, so race can become a stand-in for income.


If an AI model designed to approve loans saw this trend, it could mistakenly reject more African-American applicants based on race alone. It wouldn't recognize that two applicants, one African American and one White American, with all other factors the same should be equally likely to be approved for the loan.
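
Here is a minimal sketch of that proxy effect with synthetic data; the group attribute, incomes, and approval rule are all invented and don't model any real lender.

```python
# A minimal sketch of the proxy effect: the group attribute, incomes, and
# approval rule are all invented, not any real lender's model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 20000

group = rng.integers(0, 2, n)                        # stand-in demographic attribute
income = rng.normal(50 + 15 * group, 10, n)          # income correlates with group (hypothetical gap)
approved = (income + rng.normal(0, 5, n) > 55).astype(int)   # "true" rule depends on income only

# The model sees the group attribute plus only a noisy income measurement, so the
# group still carries information about true income and picks up a nonzero weight.
noisy_income = income + rng.normal(0, 10, n)
X = np.column_stack([group, noisy_income])
model = LogisticRegression().fit(X, approved)
print("learned weight on the group attribute:", round(model.coef_[0][0], 3))

# Two applicants identical in every observed respect except group now get
# different predicted approval odds: the group has become a proxy for income.
same_income = 55.0
proba = model.predict_proba([[0, same_income], [1, same_income]])[:, 1]
print("P(approve | group 0):", round(proba[0], 3))
print("P(approve | group 1):", round(proba[1], 3))
```

Nothing in the "true" approval rule mentions the group at all, yet the learned model treats otherwise-identical applicants differently, because the group attribute helps it guess the income it can't observe cleanly.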


In the Amazon case, men, who had been hired more often in the past, tended to use strong verbs like "executed" or "captured" more than women, creating a proxy. The algorithm specifically looked for those types of verbs and unfairly rejected women as a result.
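
A toy illustration of that kind of proxy (the mini-resumes and the scoring rule below are invented, not Amazon's system):

```python
# Invented mini-resumes and an invented scoring rule, only to illustrate the proxy:
# a scorer that rewards certain verbs reproduces whichever group used them more.
STRONG_VERBS = {"executed", "captured"}

resumes = {
    "candidate_a": "executed the product launch and captured new market share",
    "candidate_b": "organized the product launch and grew market share with the team",
}

def naive_score(text):
    # Count how many "strong" verbs appear; a crude stand-in for a learned weight.
    return sum(word in STRONG_VERBS for word in text.lower().split())

for name, text in resumes.items():
    print(name, naive_score(text))   # comparable resumes, different scores
```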


What can we do?


The big theme is that AI is being used, unethically or not, to save time and resources whenever possible.


So, how can we achieve efficiency while maintaining fairness? There are a few ideas out there:


  1. Regulate the usage of AI in choosing certain individuals over others and require quality checks for every AI model. This would be a very thorough approach, but it would probably be inefficient and could create more hassle than the AI saves.

  2. Ensure the data fed into the AI model is as free of bias as possible. When given a dataset to work with, computer scientists can modify it, or gather more data, to even out the representation of all demographics. They can also try different types of AI neural networks and choose the one with the least bias in its performance. (A toy sketch of this idea follows the list.)

  3. Spread awareness and teach the next generation of computer scientists to watch for bias. Harvard created a program called Embedded EthiCS that combines ethical thinking with computer science. Because AI is becoming so integrated in our lives, we need to think about the implications of relying on it to make crucial decisions.
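
As a rough sketch of idea 2, reusing the same kind of synthetic, hypothetical data as the facial-recognition sketch above: oversampling the underrepresented group before training narrows the per-group accuracy gap in this toy setup, at the cost of some accuracy on the majority group.

```python
# A rough sketch of idea 2 with synthetic, hypothetical data: oversample the
# underrepresented group before training and compare per-group accuracy
# with and without that fix.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(2)

def make_group(n, shift):
    y = rng.integers(0, 2, n)
    X = rng.normal(loc=y[:, None] * 2.0 + shift, scale=1.0, size=(n, 2))
    return X, y

Xa, ya = make_group(2000, shift=0.0)   # well-represented group
Xb, yb = make_group(100, shift=1.5)    # underrepresented group

def per_group_accuracy(model):
    Xa_t, ya_t = make_group(1000, shift=0.0)
    Xb_t, yb_t = make_group(1000, shift=1.5)
    return (round(accuracy_score(ya_t, model.predict(Xa_t)), 2),
            round(accuracy_score(yb_t, model.predict(Xb_t)), 2))

# Baseline: train on the skewed data as-is.
baseline = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Fix: oversample group B so both groups carry equal weight during training.
idx = rng.integers(0, len(yb), len(ya))
balanced = LogisticRegression().fit(np.vstack([Xa, Xb[idx]]),
                                    np.concatenate([ya, yb[idx]]))

print("(group A, group B) accuracy, skewed data:", per_group_accuracy(baseline))
print("(group A, group B) accuracy, rebalanced: ", per_group_accuracy(balanced))
```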


In the meantime, before these changes are fully implemented, we should be conscious of AI's influence over everything we do.


Questions to think about:


Has the use of AI crossed a line? Do we rely too much on AI?





