About

I am an Assistant Professor in the Information, Risk, and Operations Management (IROM) Department at the McCombs School of Business, University of Texas at Austin. I am also a core faculty member in the interdepartmental Machine Learning Laboratory and a Good Systems researcher. I hold a joint PhD in Machine Learning and Public Policy from Carnegie Mellon University’s Machine Learning Department and Heinz College.

My research focuses on algorithmic fairness and human-AI complementarity. As part of my work, I characterize how societal biases encoded in historical data may be reproduced and amplified by ML models, and I develop algorithms to mitigate these risks. Effective human-AI collaboration is further complicated by other factors, such as the fact that experts often care about constructs that are not well captured in the available labels. In my research, I aim to understand the limits and risks of using ML in these contexts, and to develop human-centered ML that can improve expert decision-making.

I am currently a member of FAccT’s Executive Committee. Previously, I served as co-Chair of Diversity & Inclusion for FAccT 2021–2022, and Local Arrangements co-Chair for WITS 2021. In 2017 I co-founded ML4D, which I co-led for five years.

Prospective students: If you are interested in working with me, please apply to the Information Systems track of the IROM PhD program and mention my name in your application. If you are a current student at UT Austin interested in collaborating, please send me an email.

News

03/23 – Our work on Social Norm Bias was covered by The Daily Texan, UT News, and Big Ideas McCombs.

02/23 – Our work Time to Awakening and Self-Fulfilling Prophecies After Cardiac Arrest (w/ J. Elmer, M. Kurz, P. Coppler, A. Steinberg, S. DeMasi, N. Simon, V. Zadorozhny, K. Flickinger, C. Callaway) has been published in Critical Care Medicine.

01/23 – Our work Same Same, But Different: Conditional Multi-Task Learning for Demographic-Specific Toxicity Detection, jointly led by PhD student Soumya Gupta and MSc student Sooyong Lee (w/ M. Lease), was accepted to ACM WebConf’23 (formerly WWW).

12/22 – Our work Social Norm Bias: Residual Harms of Fairness-Aware Algorithms, led by PhD student Myra Cheng while she was an undergrad (!) intern at MSR (w/ L. Mackey, A. Kalai), was accepted for publication in Data Mining and Knowledge Discovery.

11/22 – Our work Toward Supporting Perceptual Complementarity in Human-AI Collaboration via Reflection on Unobservables (w/ K. Holstein, L. Tumati, Y. Cheng) was accepted to CSCW’23.

11/22 – Our work Imputation Strategies Under Clinical Presence: Impact on Algorithmic Fairness, led by PhD student Vincent Jeanselme (w/ Z. Zhang, J. Barrett, B. Tom) was accepted to ML4H’22.

11/22 – Our work Self-fulfilling prophecies and machine learning in resuscitation science (w/ J. Elmer) has been accepted for publication in Resuscitation.

10/22 – Our work Robust Human-AI Collaboration with Bandit Feedback, led by PhD student Ruijiang Gao, won the CIST’22 Best Student Paper Award.

10/22 – I am honored to have been appointed to FAccT’s Executive Committee.

08/22 – Our paper More Data Can Lead Us Astray: Active Data Acquisition in the Presence of Label Bias, led by PhD student Yunyi Li and joint work w/ M. Saar-Tsechansky, was accepted to HCOMP’22.

07/22 – Our paper Algorithmic Fairness in Business Analytics: Directions for Research and Practice (w/ S. Feuerriegel, M. Saar-Tsechansky) has been accepted for publication in Production & Operations Management.

04/22 – Our paper Justice in Misinformation Detection Systems: An Analysis of Algorithms, Stakeholders, and Potential Harms, led by 1st-year PhD student Terry Neumann and joint work w/ S. Fazelpour, was accepted to FAccT’22.