Biases in AI are spreading, and it’s time to fix the problem




This article was written by Loren Goodman, co-founder and CTO at InRule Technology.

Traditional machine learning (ML) does only one thing: it makes a prediction based on historical data.

Machine learning begins by analyzing an array of historical data and producing what is called a model; this is called training. Once the model is created, a new row of data can be fed into the model and a prediction returned. For example, you can train a model from a list of real estate transactions and then use the model to predict the selling price of a house that has not yet sold.
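The train-then-predict workflow described above can be sketched in a few lines. This is a toy illustration using scikit-learn; the housing figures and feature choices are invented for the example, not real transaction data.

```python
# A minimal sketch of the train-then-predict workflow: analyze historical
# rows to produce a model, then feed a new row in to get a prediction back.
from sklearn.linear_model import LinearRegression

# Hypothetical historical transactions: [square_feet, bedrooms] -> sale price
X_train = [[1400, 3], [1900, 4], [1100, 2], [2400, 4], [1600, 3]]
y_train = [235_000, 315_000, 180_000, 405_000, 260_000]

# "Training" analyzes the historical data and produces the model
model = LinearRegression().fit(X_train, y_train)

# A new, unsold house: one fresh row in, one predicted price out
predicted_price = model.predict([[1750, 3]])[0]
print(f"Predicted sale price: ${predicted_price:,.0f}")
```

The same pattern holds regardless of the algorithm: whatever regularities (or biases) exist in the historical rows are what the model learns to reproduce.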

There are two main problems with machine learning today. The first is the “black box” problem. Machine learning models make very accurate predictions, but they lack the ability to explain the reasoning behind a prediction in terms understandable to humans. Machine learning models simply give you a prediction and a score indicating confidence in that prediction.

Second, machine learning cannot think beyond the data that was used to train it. If a historical bias exists in the training data, then, if left unchecked, that bias will be present in the predictions. While machine learning offers exciting opportunities for both consumers and businesses, the historical data on which these algorithms are built can be laden with inherent biases.
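To make the point concrete, here is a toy example (all data invented) of how a historical bias surfaces before any model is even trained: if the historical decisions favored one group, a model fit to those labels will learn to do the same.

```python
# Entirely invented historical lending decisions: (group, approved_flag).
# Group "A" was historically approved far more often than group "B" --
# a model trained on these labels will reproduce that gap unchecked.
from collections import defaultdict

history = [("A", 1), ("A", 1), ("A", 1), ("A", 0),
           ("B", 1), ("B", 0), ("B", 0), ("B", 0)]

totals, approvals = defaultdict(int), defaultdict(int)
for group, approved in history:
    totals[group] += 1
    approvals[group] += approved

for group in sorted(totals):
    rate = approvals[group] / totals[group]
    print(f"Group {group}: historical approval rate {rate:.0%}")
```

Simple per-group base-rate checks like this are one of the cheapest ways to see what a model is about to learn before it learns it.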

The cause for concern is that corporate decision makers have no effective way of seeing the biased practices that are encoded in their models. For this reason, there is an urgent need to understand what biases lurk in the source data. Along with this, human-run governors should be installed as protection against actions resulting from machine learning predictions.

Biased predictions lead to biased behaviors, and as a result we “breathe our own exhaust”: we keep acting on biased decisions, and those actions feed back into the data, creating a cycle that compounds with each new prediction. The sooner you detect and eliminate bias, the sooner you mitigate risk and expand your market to previously dismissed opportunities. Those who don’t address bias now expose themselves to a myriad of future risks, penalties, and lost revenue.

Demographic Patterns in Financial Services

Demographic patterns and trends can also fuel other biases in the financial services industry. In a famous example from 2019, web programmer and author David Heinemeier Hansson took to Twitter to share his outrage that Apple’s credit card had offered him 20 times his wife’s credit limit, even though they file joint taxes.

Two things to keep in mind about this example:

  • The underwriting process was found to be in compliance with the law. Why? Because there is currently no law in the United States regarding bias in AI, as the subject is considered highly subjective.
  • To train these models correctly, historical biases will need to be included in the algorithms. Otherwise, the AI will not know that it is biased and will not be able to correct its errors. Accounting for bias in this way breaks the cycle of “breathing our own exhaust” and provides better predictions for tomorrow.
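One simple way to act on a known historical bias, sketched below with invented data, is sample reweighting (sometimes called “reweighing”): give each (group, label) pair a weight so that group and outcome look statistically independent during training, rather than letting the historical gap set the odds.

```python
# A toy sketch of sample reweighting. Weight each (group, label) pair by
# expected frequency (if group and label were independent) divided by its
# observed frequency. Groups and labels here are invented for illustration.
from collections import Counter

samples = [("A", 1), ("A", 1), ("A", 1), ("A", 0),
           ("B", 1), ("B", 0), ("B", 0), ("B", 0)]

n = len(samples)
group_counts = Counter(g for g, _ in samples)
label_counts = Counter(y for _, y in samples)
pair_counts = Counter(samples)

weights = {
    (g, y): (group_counts[g] * label_counts[y] / n) / pair_counts[(g, y)]
    for (g, y) in pair_counts
}

for pair, w in sorted(weights.items()):
    print(pair, round(w, 2))
```

Rare pairs (here, approvals in the historically disadvantaged group) get weights above 1, so the training process no longer treats the historical gap as ground truth.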

True Cost of AI Bias

Machine learning is used in a variety of applications that impact the public. In particular, social service programs, such as Medicaid, housing assistance, or Supplemental Security Income, are coming under increasing scrutiny. The historical data these programs rely on can be plagued with bias, and reliance on biased data in machine learning models perpetuates that bias. However, becoming aware of the potential bias is the first step to correcting it.

A popular algorithm used by many large US-based healthcare systems to screen patients for high-risk care management intervention programs was found to discriminate against black patients because it was trained on data about the cost of treating patients. The model did not take into account racial disparities in access to health care, which contribute to lower expenditures for black patients compared to similarly diagnosed white patients. According to Ziad Obermeyer, acting associate professor at the University of California, Berkeley, who worked on the study, “Cost is a reasonable predictor of health, but it’s biased, and that choice is actually what introduces a bias in the algorithm.”

Additionally, a widely cited case showed that judges in Florida and several other states relied on a machine learning-based tool called COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) to estimate inmate recidivism rates. However, many studies have challenged the accuracy of the algorithm and found racial biases – even though race was not included as an input to the model.

Overcoming Bias

The solution to bias in AI models? Put people at the helm to decide when, or whether, to take real-world action based on a machine learning prediction. Explainability and transparency are essential for people to understand AI and why the technology makes certain decisions and predictions. By surfacing the reasoning and factors impacting ML predictions, algorithmic biases can be exposed, and decisions can be adjusted to avoid costly penalties or harsh feedback via social media.

Businesses and technologists need to focus on explainability and transparency within AI.
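One basic form of the explainability the article calls for, sketched below with invented data and feature names, is to use an interpretable model whose per-feature weights can be read off directly, so a human reviewer can see which inputs drive a prediction before acting on it.

```python
# A sketch of simple model transparency (not a full explainability solution):
# fit an interpretable linear model and report each input's learned weight.
# Feature names and all numbers are invented for illustration.
from sklearn.linear_model import LinearRegression

features = ["income", "debt", "years_employed"]
X = [[55, 10, 2], [80, 5, 10], [40, 20, 1], [95, 2, 15], [60, 12, 4]]
y = [610, 720, 540, 780, 630]  # hypothetical credit scores

model = LinearRegression().fit(X, y)

# Per-feature weights show the direction and strength of each input's
# influence, so a reviewer can spot a suspicious driver before taking action.
for name, weight in zip(features, model.coef_):
    print(f"{name}: {weight:+.2f}")
```

For opaque models, the same goal is usually pursued with post-hoc tools (feature-importance or attribution methods), but the principle is identical: a human must be able to see what drove the prediction.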

There is limited but growing regulation and guidance from lawmakers to mitigate biased AI practices. Recently, the British government published an Ethics, Transparency and Accountability Framework for Automated Decision-Making to provide more specific guidance on the ethical use of artificial intelligence in the public sector. The seven-point framework will help departments create safe, sustainable, and ethical algorithmic decision-making systems.

To unleash the full power of automation and create equitable change, humans need to understand how and why AI bias leads to certain outcomes and what that means for all of us.

Loren Goodman is co-founder and CTO of InRule Technology.


