
Min Kyung Lee Wins Grant to Improve Fairness in Artificial Intelligence (AI)


From online information curation and resume screening to mortgage lending and police surveillance, artificial intelligence (AI) systems are increasingly being used to make high-stakes decisions. The hope is that algorithms can produce more objective results with unmatched economic efficiency. But there is growing concern that even algorithmic systems designed by seemingly conscientious developers might not be as free of human biases as expected.

According to Dr. Min Kyung Lee, assistant professor at the University of Texas at Austin School of Information (iSchool), bias can enter algorithmic systems in many ways: the humans interacting with the machine may be biased, the dataset the machine learns from may contain biases, or the developers' concept of fairness may simply not align with that of the system's users.

“There are many, many different concepts of fairness,” Lee said. “The one a developer happens to choose may not be good for the context in which the system will be deployed.” To investigate ways to close the gap between oversimplified algorithmic objectives and the complexities of real-world decision-making contexts, Lee is collaborating with researchers from Carnegie Mellon University and the University of Minnesota-Twin Cities on a project titled Advancing Fairness in AI with Human-Algorithm Collaborations, which was awarded $1,037,000 in grant funding from the National Science Foundation in partnership with Amazon.

“Our unique research angle is that we learn what is fair for the context where AI will be deployed. We understand the human concepts of fairness in that context and implement them in the AI system,” Lee said. “A lot of algorithmic work has focused on a small set of fairness concepts or definitions, but systems used in the real world have to reflect those social and legal fairness concepts. To broaden those perspectives, we need to engage with the community of users to learn their fairness concepts.”

Lee’s research project enables close human-algorithm collaboration, combining innovative machine learning methods with approaches from human-computer interaction (HCI) for eliciting feedback and preferences from human experts and stakeholders. The team will develop algorithms and mechanisms that manage users’ acceptable fairness-utility trade-offs and that detect and mitigate the biases of human operators, as well as methods that use human feedback to correct and de-bias existing models while the AI system is deployed.
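
The announcement does not include technical details, but a minimal sketch can illustrate the kind of fairness-utility trade-off the paragraph describes. Everything here is an assumption made for illustration: the demographic-parity gap as the fairness measure, accuracy as the utility, and a stakeholder-tuned weight (here called lambda_fair); none of these choices or names come from the project itself.

```python
# Hypothetical sketch of a fairness-utility trade-off (not the project's method).
import numpy as np

def demographic_parity_gap(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute difference in positive-prediction rates between two groups."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def tradeoff_score(y_true, y_pred, group, lambda_fair=0.5):
    """Utility (accuracy) minus a weighted fairness penalty.

    lambda_fair encodes how much utility a stakeholder is willing to trade
    for fairness; eliciting such preferences from affected communities is
    the kind of question the project studies.
    """
    utility = (y_true == y_pred).mean()
    return utility - lambda_fair * demographic_parity_gap(y_pred, group)

# Toy comparison: model_a is more accurate but skewed across groups;
# model_b gives up some accuracy for balanced positive rates.
y_true  = np.array([1, 0, 1, 1, 0, 0, 1, 0])
group   = np.array([0, 0, 0, 0, 1, 1, 1, 1])
model_a = np.array([1, 0, 1, 1, 0, 0, 0, 0])
model_b = np.array([1, 0, 1, 0, 0, 0, 1, 1])

for lam in (0.0, 1.0):
    print(f"lambda={lam}:",
          "A =", tradeoff_score(y_true, model_a, group, lam),
          "B =", tradeoff_score(y_true, model_b, group, lam))
```

With lambda_fair = 0 the score reduces to plain accuracy and the skewed model wins; with lambda_fair = 1 the balanced model wins, which is the sense in which different stakeholder weights pick different "fair" systems.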


“I’m really interested in developing participatory mechanisms for community members to (1) understand what’s going on in the AI system, and (2) have a say in the design process so that these systems are actually designed to be useful to the users,” she said. “This is the right time to do research because we’re still in an early stage so we can make a big difference in the way this technology unfolds over time and integrates into society. We’re taking an interdisciplinary approach and trying to make the systems more human-centered.”

Lee joined the iSchool in January 2020 and is excited to be part of the school’s growing interdisciplinary information program. Her project, still in its first year, falls under the National Science Foundation’s Program on Fairness in Artificial Intelligence (FAI).

Learn more about Min Kyung Lee's research: