I am an Assistant Professor in the Information, Risk and Operations Management (IROM) Department at the McCombs School of Business, University of Texas at Austin. I am also a core faculty member of the interdepartmental Machine Learning Laboratory and a Good Systems researcher. I hold a joint PhD in Machine Learning and Public Policy from Carnegie Mellon University’s Machine Learning Department and Heinz College.

My research focuses on algorithmic fairness and human-AI complementarity. In my work, I characterize how societal biases encoded in historical data may be reproduced and amplified by ML models, and I develop algorithms to mitigate these risks. Effective human-AI collaboration is further complicated by other factors, such as experts caring about constructs that are not well captured in the available labels. My research aims to understand the limits and risks of using ML in these contexts, and to develop human-centered ML that can improve expert decision-making.

I am currently co-Chair of Diversity & Inclusion for FAccT 2021–2022, and Local Arrangements co-Chair for WITS 2021. In 2017 I co-founded ML4D, and I now serve on its Steering Committee.

Prospective students: If you are interested in working with me, please apply to the Information Systems track of the IROM PhD program and mention my name in your application. If you are a current student at UT Austin interested in collaborating, please send me an email.


08/22 – Our paper “More Data Can Lead Us Astray: Active Data Acquisition in the Presence of Label Bias”, led by PhD student Yunyi Li and joint work w/ M. Saar-Tsechansky, was accepted to HCOMP’22.

07/22 – Our paper “Algorithmic Fairness in Business Analytics: Directions for Research and Practice” (w/ S. Feuerriegel, M. Saar-Tsechansky) has been accepted for publication in Production & Operations Management.

04/22 – Our paper “Justice in Misinformation Detection Systems: An Analysis of Algorithms, Stakeholders, and Potential Harms”, led by 1st-year PhD student Terrence Neumann and joint work w/ S. Fazelpour, was accepted to FAccT’22.

03/22 – I was interviewed together with Min Kyung Lee for UT Austin’s Bridging Barriers.

01/22 – Our paper “Diversity in Sociotechnical Machine Learning Systems” (w/ S. Fazelpour) has been accepted for publication in Big Data & Society.

12/21 – Our paper “Leveraging Expert Consistency to Improve Algorithmic Decision Support” received the Best Paper Runner-Up Award at WITS’21.

07/21 – Our research on bias in predictive policing was covered by El País.

04/21 – Our paper “Human-AI Collaboration with Bandit Feedback” accepted to IJCAI’21 (joint w/ R. Gao, M. Saar-Tsechansky, M. Lease, M.K. Lee & L. Han).

04/21 – Our FAccT’21 Tutorial “Sociocultural Diversity in Machine Learning” is now available online (joint w/ S. Fazelpour).

01/21 – Our paper “The effect of differential victim crime reporting on predictive policing systems” accepted to FAccT’21 (joint w/ N.J. Akpinar & A. Chouldechova).

11/20 – I will serve as Diversity & Inclusion Chair for ACM FAccT 2021-2022.

10/20 – I was awarded a Google AI Award for Inclusion Research.