Guest Lecturer Dr. Lucy Suchman Warns of Risks of AI in Battlefield Targeting, in Iran and Beyond

April 14, 2026

On March 10, only days into the widening conflict in Iran, Dr. Lucy Suchman, professor emerita at Lancaster University (U.K.) and a towering figure in the field of human-computer interaction, delivered a guest lecture to iSchool Professor David Gray Widder’s doctoral seminar on Political Economy of AI Supply Chains. Suchman’s lecture, as trenchant as it was timely, focused on the risks and often unacknowledged limitations of artificial intelligence as used in warfare.

Lucy Suchman
Lucy Suchman, Professor Emerita, Lancaster University

Suchman based her lecture on her 2022 article “Imaginaries of omniscience: Automating intelligence in the US Department of Defense,” published in the journal Social Studies of Science. In the article, Suchman critiqued a “war apparatus that treats the contingencies and ambiguities of relations on the ground as noise from which a stable and unambiguous signal can be extracted” – willing oversimplifications, in her view, which lead in turn to “the continued destructiveness of US interventions and the associated regeneration of enmity.”  

Suchman updated this material to discuss the ongoing conflict in the Middle East, touching on the recent stand-off between the recently renamed U.S. Dept. of War (DOW) and AI company Anthropic over proper use of its software, as well as the Feb. 28 bombing of an Iranian elementary school, which killed more than 175 people, mostly young girls.

Suchman worked as a scientist at Xerox's famed Palo Alto Research Center (PARC) for 22 years. She made her name academically with a PhD dissertation at UC Berkeley critiquing interfaces designed using a technique then known as “AI planning” – built, in short, for intelligent machines more than for humans. In recent years, she has turned her attention to AI use in warfare and is a member of the International Committee for Robot Arms Control. Her research is foundational to programs like the information studies PhD at the iSchool, where rigorous study of AI is combined with human-centered analysis and innovation.

Suchman’s is one of a series of important outside voices at the iSchool this semester. This spring, Widder’s Political Economy of AI Supply Chains seminar will also welcome guest lectures from UT American Studies professor Dr. Iván Chaar López, a distinguished expert in border technology; UCLA professor Dr. Miriam Posner, author of an upcoming book on the history of globalized supply chain systems; and UCSD professor Dr. Lilly Irani, an expert on gig workers and labor in tech ecosystems.  

David Widder
David Widder, Assistant Professor

“My goal is to help welcome our amazing PhD students into a global network of folks studying similar topics, and help them build their network with senior colleagues,” Widder says. “Dr. Suchman is a mentor and collaborator of mine and a distinguished voice across many of our areas of research in the iSchool, including human-computer interaction, AI, and critical studies of technology, so it made sense for her to be our first guest, and I am grateful to her for accepting my invitation to speak to my students.”

Suchman began the guest lecture, delivered via Zoom, by connecting personally with Widder’s PhD students, hearing a bit about each one’s area of specialization. Then she turned to a critique of the overconfidence in AI that she perceives in the DOW, which has described itself as working, through projects like Joint All-Domain Command and Control (JADC2), towards the goal of “simultaneously sensing, making sense of and acting upon a vast array of data and information … fusing and analyzing the data with the help of machine learning and artificial intelligence and providing warfighters with preferred options at speeds not seen before.”

Suchman critiqued this vision on both conceptual and practical levels. On the conceptual level, she pressed home a key idea that has animated her thinking for decades: that AI’s ability to understand and act in the world is “made possible through zones of ignorance regarding everything which is incomputable.” Practically speaking, she cited as an example a DOW AI system from the late 2010s, which was trained to recognize 38 categories of military targets based on 150,000 training images. One category was an “ISIS pickup truck.”

“Here we have what is arguably an object that we could all recognize as a truck, but the designation of it as an ISIS pickup truck is not something that the object itself carries with it,” Suchman said. “We really need to question the premise that these categories are pre-existing and stable, and that the people who they capture have those identities, independent of these classificatory schemes and the political and economic interests that inform those schemes.”

Such questions are all the more compelling in light of the February 28 weekday-morning bombing of the Minab elementary school, in a building that had housed an Iranian military facility more than a decade ago. It remains under investigation whether the school was targeted with AI assistance, or even by U.S. forces.

The Minab school tragedy casts a dark shadow over the dust-up just weeks earlier between Anthropic, developer of the popular AI platform Claude, and the DOW. “Anthropic insisted that its guidelines for the application of its technologies prohibited either mass domestic surveillance or fully autonomous weapon systems,” Suchman explained. “Those guidelines were too stringent for the DOW, which insisted that it was not up to companies to decide how the military was going to use their technologies and has now declared Anthropic a supply chain risk.”

Suchman sees a different, graver risk at play, to long-established international rules of combat that help safeguard against escalating atrocities. Allowing AI systems to run targeting for military campaigns – or, worse, using AI as cover for campaigns of annihilation – could allow vicious cycles of violence with responsibility increasingly outsourced to machines.

“This matters to the legitimacy of warfare, because those distinctions, and particularly the distinction between those who are in combat and pose an imminent threat and those who are identified as civilians or otherwise out of combat, is absolutely the foundation of international humanitarian law,” Suchman said.

To conclude the lecture, Suchman responded to student questions on topics as varied as whether AI warfare systems should be improved or fundamentally reframed, how she made such a confident start to her academic career, and how students should balance developing critical frameworks against keeping pace with each rapid advance in the field of AI. The response from students was electric, and many said they looked forward to meeting future guest lecturers with other illuminating points of view on the biggest issues of our time.

Students described the guest lecture series as a “huge opportunity” to be “quote-unquote in the room with someone” whose name they’d otherwise only see on a reading assignment.

“It's really interesting to see someone who’s had a long career in our field, who started off exploring things that people weren't asking questions about 40 years ago, and is able to connect to how it’s still relevant today,” added PhD student Haley Triem, who helped organize Suchman’s talk and served as interlocutor. “It's inspiring to see, and it makes me feel like I can maybe do that.”
