Dr. Soo Young Rieh Named New Associate Dean for Education
Sandlin, Anu | May 22, 2019
The UT iSchool is delighted to announce that Dr. Soo Young Rieh has accepted a position as full professor and Associate Dean for Education beginning August 1, 2019. Dr. Rieh will succeed Philip Doty as the iSchool's Associate Dean for Education. Her role, along with a new Associate Dean for Research position also starting this fall, is one of two senior administrative faculty positions created by Dean Eric Meyer in fall 2018. "Dr. Rieh is an exceptional scholar and an administrator with a proven track record in developing innovative cross-disciplinary teaching programs at the undergraduate and graduate levels; we are thrilled she will be joining us," said Dean Meyer in announcing Dr. Rieh's appointment.
Soo Young Rieh is currently an Associate Professor in the School of Information at the University of Michigan, where she served as Director of the Master of Science in Information (MSI) Program from 2014 to 2017. Her research areas include human information behavior, web searching behavior, human information interaction, and information literacy. She has contributed to the field of information science by initiating and pursuing two distinct research agendas: credibility assessment in information seeking, and searching as learning. She has examined how to help people find credible information in online environments and how people's credibility assessments influence their information seeking strategies. In recent years, her research has focused on conceptualizing searching as a learning process through the concept of comprehensive search. She has also conducted research on assessing learning outcomes in web searching. Her long-term research goal is to design learning-centric search systems that facilitate human learning and support critical thinking and creativity by expanding searchers' use of information-literate actions such as evaluating, comparing, and differentiating the value of information across multiple sources. Her research has been funded by the Institute of Museum and Library Services (IMLS), the MacArthur Foundation, and Brainly, Inc.
"It is such an honor to have been selected for this position," said Rieh. "I have long admired The University of Texas iSchool's human-centered research approaches to information and technology. With Dean Meyer, I am committed to moving the school forward by building and strengthening innovative educational programs."
Dr. Rieh has received several research awards, including the Best JASIST Paper Award (2005, 2011), an ACM SIGIR CHIIR Honorable Mention (2019), the ASIS&T Best Conference Paper Award (2010), the ASIS&T SIGUSE Award for Best Information Behavior Conference Paper (2007, 2015), and the Eugene Garfield-ALISE Doctoral Dissertation Award (2002). Before joining the Michigan faculty in 2002, Rieh was a Human Factors Research Engineer at the Excite@Home Search and Directory Group. She received her Ph.D. in Information Science from Rutgers University.
Texas iSchool Welcomes its First Harrington Faculty Fellow
Sandlin, Anu | Apr 30, 2019
The University of Texas at Austin, School of Information welcomes its first Harrington Faculty Fellow, Dr. Casey Pierce, who recently accepted an appointment as a visiting faculty member at the Texas iSchool for one academic year.
An Assistant Professor at the University of Michigan School of Information (UMSI), Pierce studies the changing nature of work as it relates to technology, policy, and knowledge sharing in organizations. In the past, she has examined enterprise social media use, offshoring work arrangements, and the role of technology in U.S. healthcare policy implementation.
Currently, Pierce is studying how telehealth/telemedicine platforms and policies impact clinicians’ work practices and professional identities. Her research addresses implications concerning how digital platforms shape new models of patient care and the rise of contingent work arrangements in the healthcare industry.
“As a Harrington Fellow, I plan to continue my research examining how telehealth impacts the profession of mental healthcare, new forms of digital labor, and healthcare policy. I am excited to collaborate with faculty at the UT iSchool,” she said.
Pierce will serve as a Harrington Faculty Fellow beginning September 1, 2019 and ending May 31, 2020. Dean Meyer expressed his excitement about Pierce’s acceptance of the Fellowship. “We are thrilled to have Casey with us for one full academic year,” he said. “We look forward to working with her, and seeing her research strengthen the connections between Michigan and Texas.”
“Casey will be the first Harrington Faculty Fellow to be based at the School of Information. We are extremely grateful for the Harrington funding, and hope that Casey will be the first of many top young scholars to join our network of collaborators,” said Meyer.
The Harrington Faculty Fellows Program supports approximately five Fellows each academic year and brings top young scholars within eight years of their first tenure-track appointment to UT from other prestigious universities. Fellows visit the University of Texas at Austin to pursue their research and collaborate with colleagues. Although they have no teaching obligations, Fellows are welcome to conduct seminars. To maintain their connection to the University of Texas and other Harrington Fellows throughout their careers, all participants become lifetime members of the Harrington Fellowship Society.
As a visiting member of the faculty at The University of Texas at Austin, School of Information, Pierce intends to host a research symposium that brings together scholars and practitioners examining the changing nature of healthcare work. "The University of Texas is an exciting place to host this research symposium, given Austin's thriving tech hub and the new Dell Medical School," she said.
An interdisciplinary social scientist, Dr. Casey Pierce has received multiple awards for her work including best dissertation and best paper awards from the Academy of Management and International Communication Association. Her research has been published in top journals including Information Systems Research, Journal of Communication, and the Journal of the Association for Information Science and Technology. Pierce earned her Ph.D. from the Media, Technology, and Society program at Northwestern University, School of Communication, and her B.A. and M.A. from the University of Southern California. To learn more about her research, visit her website: www.caseyspierce.com.
Microsoft Research Partners with UT Austin, Texas iSchool for Microsoft Ability Initiative
Sandlin, Anu | Mar 29, 2019
Despite significant developments in automated image captioning, current approaches are not well aligned with the needs of people with visual impairments. People who are blind or have low vision face a real challenge: without visual assistance, learning what content is present in an image is time-consuming and sometimes impossible. As such, these communities often rely on a visual assistant to describe photos they take themselves or find online.
In an ideal world, a fully automated computer vision (CV) approach would provide such descriptions. However, this artificial intelligence (AI) process faces several challenges. CV training datasets rarely include images taken by this population, and people who are blind or have low vision must passively listen to one-size-fits-all descriptions of images to locate information of interest. In addition, CV algorithms often deliver incomplete or incorrect information. Because of these shortcomings, reliable image captioning systems continue to depend on humans to provide descriptions of photos to people with visual impairments.
Determined to improve image captioning for blind and low vision communities, principal investigator and Texas iSchool Assistant Professor Danna Gurari and Associate Professor Ken Fleischmann believe there is a more efficient and effective solution, one that reduces human effort while producing accurate results. They recently embarked on a new project to "design algorithms and systems that close the gap between CV algorithm and human performance for describing pictures taken by both sighted and visually impaired photographers."
But the Texas School of Information professors weren't the only ones thinking about how to improve image captioning for people who are blind or have low vision. A team of researchers at Microsoft Research recently announced a similar vision and goal: to train AI systems to provide more detailed captions that offer a richer understanding and more accurate representation of images for people who are blind or have low vision. In pursuit of this mission, Microsoft Research developed a new project called the Microsoft Ability Initiative.
According to Microsoft Research Principal Researcher and Research Manager Meredith Ringel Morris, “the companywide initiative aims to create a public dataset that ultimately can be used to advance the state of the art in AI systems for automated image captioning.”
After a competitive process involving a select number of universities, Microsoft Research's search for an academic research unit to partner with on the new venture ended when it chose The University of Texas at Austin, School of Information. The proposed work of Gurari and Fleischmann was the only project selected through this competition.
The Texas iSchool research team proposed two main tasks: (1) introducing the first publicly available image captioning dataset from people with visual impairments, paired with a community AI challenge and workshop, and (2) identifying the values and preferences of people with visual impairments to inform the design of next-generation image captioning systems and datasets.
“The collaboration builds upon prior Microsoft research that has identified a need for new approaches at the intersection of computer vision and accessibility,” explained Morris.
The Microsoft Research team, which includes Ed Cutrell, Roy Zimmermann, Meredith Ringel Morris, and Neel Joshi, plans to collaborate with the UT Austin School of Information over an 18-month period. Gurari and Fleischmann will lead the UT Austin team, which will also include three PhD students and one postdoctoral fellow.
The Microsoft Ability Initiative builds on the interdisciplinary team’s expertise in computer vision, human-computer interaction, accessibility, ethics, and value-sensitive design. Gurari’s team is experienced in establishing new datasets, designing human-machine partnerships, creating human computer interaction systems, and developing accessible technology. As co-founder of the ECCV VizWiz Grand Challenge in 2018, Gurari is skilled in community-building and has a previous record of success in creating public datasets to advance the state-of-the-art in AI and accessibility.
Fleischmann’s team offers complementary experience in the ethics of AI and in understanding users’ values to inform technology design. Given his expertise in the role of human values in the design and use of information technologies, Fleischmann will lead the effort to uncover the needs and values of people with visual impairments, which will ultimately inform the design of future image captioning systems.
The Microsoft researchers involved in this initiative have specialized experience in accessible technologies, human-centric AI systems, and computer vision. “Our efforts are complemented by colleagues in other divisions of the company, including the AI for Accessibility program, which helps fund the initiative, and Microsoft 365 accessibility,” explained Morris.
Dubbed “a collaborative quest to innovate in image captioning for people who are blind or with low vision,” the Microsoft Ability Initiative, Morris explained, “is one of an increasing number of initiatives at Microsoft in which researchers and product developers are coming together in a new, cross-company push to spur innovative and exciting new research and development in the area of accessible technologies.”
Gurari believes that the initiative “will not only advance the state of the art of vision-to-language technology, but it will also continue the progress Microsoft has made with such tools and resources as the Seeing AI mobile phone application and the Microsoft Common Objects in Context (MS COCO) dataset. It will also serve as a great teaching opportunity for Texas iSchool students.”
The Texas iSchool team will employ a user-centered approach to the problem, including working with communities who are blind or with low vision to improve understanding of their expectations of image captioning tools. The team will also host community challenges and workshops to accelerate progress on algorithm development and facilitate the development of more accessible methods to assist people who are blind or with low vision.
Gurari and Fleischmann explain that “this work can empower people with visual impairments to more rapidly and accurately learn about the diversity of visual information, while contributing to solving related problems including image search, visual question answering, and robotics.”
The Microsoft Research team launched the new collaboration with the Texas iSchool during a two-day visit to Austin in January. Morris noted that the Microsoft Research team came away from the meeting at The University of Texas at Austin, School of Information, “even more energized about the potential for this initiative to have real impact in the lives of millions of people around the world.” “We couldn’t be more excited,” she said.
The Texas iSchool professors share the Microsoft Research team’s excitement about the upcoming collaboration. “To be selected for this gift is a great honor,” said Gurari and Fleischmann. “We look forward to working with the Microsoft Research team over the coming months, and are eager to make progress toward our shared goal: to better align image captioning systems with the needs of those who are blind or with low vision.”