
Please join us in welcoming new Assistant Professor David Gray Widder, who starts at the University of Texas at Austin School of Information in December 2025. Widder arrives most recently from Cornell University, where he was a postdoctoral fellow at the Digital Life Initiative from 2023-2025. Prior to that, he received his PhD in computer science from Carnegie Mellon University, where his dissertation was titled “Ethics down the AI supply chain: playing with power.”
Widder’s research focuses on questions of responsibility and power in new technologies, with a special emphasis on artificial intelligence. “There are two key throughlines to my work, both contributing to AI ethics,” he says. “First, I study epistemic practices in AI—what counts as knowledge, whose knowledge counts, and how we can broaden or rethink this when necessary. Second, I study broad political economic questions in AI, in particular examining corporate power and major funders in AI.”
Widder’s dissertation work sounds the alarm on industry practices that could allow for the development of potentially harmful AI systems. For example, developers are typically asked to consider the potential harms of the AI systems they work on, but Widder’s research shows that, because most AI products are built collaboratively and often repurpose preexisting modules, AI developers often don’t know and can’t control how their work is used downstream, nor how the modules they depend on were built. Additionally, he finds that employee precarity and organizational dynamics can limit when workers can raise and discuss ethical issues. Widder’s findings are backed by over 110 interviews with software engineers and fieldwork at NASA, Microsoft, and Intel.
“Even when software engineering workers do develop ethical concerns about what they're building, they rarely have power to address these concerns,” he says. “My work suggests that most research to develop ‘fairer’ or otherwise more ethical AI will be ineffective, unless we also develop strategies to empower workers to make use of these ideas, or alternatively, keep Big Tech companies in check through regulation.”
Widder also has a research focus in open-source AI. He has published on the perils of open-source deepfake tools, which can be used for misinformation and gender-based harassment. He has also written on the dim prospects for open-source algorithms democratizing AI technology. “Even ‘open-source’ AI is not really ‘open’ because to build AI or use it at scale requires access to resources concentrated in the hands of just a few companies,” he says.
Most recently, Widder has been analyzing over 7,000 U.S. military grant solicitations for AI research to examine both the military’s goals for AI and how academic researchers are brought on board to serve those aims.
A computer scientist by training, Widder was drawn to the iSchool as the “perfect place” for studying the role that computing plays in our world. He looks forward to being part of the iSchool’s substantial community exploring the promise and pitfalls of ethical AI. With Angela D.R. Smith, he is excited to explore how education for students and practicing technologists can help them think more critically about ethics in their work and careers. He is also excited to join forces with iSchool professors James Howison, Nathan TeBlunthuis, and Hanlin Li in what he describes as the most impressive concentration of open-source AI experts in the world, right here at the iSchool.
He’s also proud to be part of an academic enterprise at UT that is not only world-class but also public-oriented. “As the son of high school teachers, the only reason I was able to afford college was because of a state school—the University of Oregon—where I got a top-quality education that I could afford,” he says. “I wanted to be at a public university, so that I could give students the opportunity to get the kind of educational and research experience I did.”