Carnegie Mellon University researchers are challenging a long-held belief that accuracy must be traded for fairness when machine learning is used to make public policy decisions.
As the use of machine learning has grown in areas like criminal justice, hiring, health care delivery, and social service interventions, so have concerns that such applications introduce new inequities or amplify existing ones, particularly for racial minorities and people with economic disadvantages. To mitigate this bias, practitioners adjust the data, labels, model training, scoring systems, and other components of the machine learning system. The underlying theoretical premise has been that these modifications make the system less accurate.
In a new study, just released in Nature Machine Intelligence, a CMU team seeks to refute that notion. The trade-off was minimal in practice across a variety of policy domains, according to research by Rayid Ghani, a professor in the School of Computer Science’s Machine Learning Department and the Heinz College of Information Systems and Public Policy, Kit Rodolfa, a research scientist in ML, and Hemank Lamba, a post-doctoral researcher in SCS, who tested that assumption in real-world applications.
“You actually can have both. You don’t have to sacrifice accuracy to build systems that are fair and equitable,” Ghani said. “But it does require you to deliberately design systems to be fair and equitable. Off-the-shelf systems won’t work.”
Ghani and Rodolfa focused on situations where limited resources are in high demand and machine learning systems help allocate them. The researchers examined systems in four areas: prioritizing limited mental health care outreach based on a person’s likelihood of returning to jail, to reduce reincarceration; predicting serious safety violations to better allocate a city’s sparse housing inspectors; modeling the risk of students not graduating from high school on time, to identify those most in need of additional support; and helping teachers reach crowdfunding goals for classroom needs.
In each situation, the researchers found that models optimized for accuracy, standard practice in machine learning, could effectively predict the outcomes of interest but showed considerable disparities in which groups were recommended for interventions. However, when the researchers applied adjustments to the outputs of the models that targeted improving their fairness, they discovered that disparities based on race, age, or income, depending on the situation, could be removed without a loss of accuracy.
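The article does not spell out the exact adjustments the team applied, but one common family of post-processing fixes is to choose group-specific score thresholds so that each group receives interventions at the same rate among those who truly need them (equalized recall). The sketch below illustrates the idea on synthetic data; the group variable, score shift, and target recall are all hypothetical, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Synthetic scored population (illustrative only; not the study's data) ---
n = 2000
group = rng.integers(0, 2, n)                  # hypothetical protected attribute (0 or 1)
label = (rng.random(n) < 0.3).astype(int)      # true "needs intervention" outcome
# A score that tracks the label but is shifted downward for group 1,
# imitating an accuracy-optimized model whose rankings carry a disparity.
score = 0.5 * label - 0.15 * group + rng.normal(0.0, 0.2, n)

def group_thresholds(score, label, group, target_recall):
    """Pick a per-group score threshold so that each group's recall
    (fraction of its true positives selected) hits target_recall."""
    thresholds = {}
    for g in np.unique(group):
        pos_scores = score[(group == g) & (label == 1)]
        # Threshold at the (1 - target_recall) quantile of the group's
        # positive scores, so ~target_recall of them land above it.
        thresholds[g] = np.quantile(pos_scores, 1.0 - target_recall)
    return thresholds

target = 0.6
thr = group_thresholds(score, label, group, target)
selected = score >= np.array([thr[g] for g in group])

for g in (0, 1):
    mask = (group == g) & (label == 1)
    print(f"group {g}: recall = {selected[mask].mean():.2f}")  # both near 0.60
```

With a single shared threshold, group 1's lower scores would give it a lower recall; the per-group thresholds equalize it. Whether this costs accuracy in practice is exactly the empirical question the study addresses.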
Ghani and Rodolfa hope their findings will start to change the minds of fellow researchers and policymakers as they consider the use of machine learning in decision-making.
“We want the artificial intelligence, computer science, and machine learning communities to stop accepting this assumption of a trade-off between accuracy and fairness and to start intentionally designing systems that maximize both,” said Rodolfa. “We hope that policymakers will embrace machine learning as a tool in their decision-making to help them achieve equitable outcomes.”