Research

Research awards

Best Paper Runner-Up Award at INFORMS ISS Cluster, 2025.
Honorable Mention Award at ACM CHI ’24 (top 5% of submissions).
Best Student Paper Award (student author: Ruijiang Gao) at CIST, 2022.
Best Paper Runner-Up Award at WITS, 2021.
Google AI Award for Inclusion Research, 2020.
EECS Rising Stars, 2019.
Best Thematic Paper Award at NAACL, 2019.
Microsoft Research Dissertation Grant, 2018.
1st Place Innovation Award on Data Science at Data for Policy, 2016.
Best Student Presentation Award at Domestic Nuclear Detection Office (DNDO) Academic Research Initiative Grantees Conference, 2015.

Working papers

Learning Complementary Policies for Human-AI Teams (with R. Gaoº, M. Saar-Tsechansky). Best Student Paper Award, CIST’22. Major Revision, Management Science.

Improving Human-Machine Augmentation via Metaknowledge of Unobservables (with K. Holstein). Extended version of CSCW’23 paper. Major Revision, Management Information Systems Quarterly.

Using Machine Bias To Measure Human Bias (with W. Dong, M. Saar-Tsechansky). Major Revision, Management Information Systems Quarterly.

Human Discernment of Algorithmic Errors: A Case Study in Child Welfare (with R. Fogliato, A. Chouldechova). Extended version of CHI’20 paper. Major Revision, Management Information Systems Quarterly.

Looks Can Mislead: Feature-Based Explanations Do Not Reliably Improve Fairness of AI-Assisted Decisions (with J. Schoeffer, N. Kuehl). Extended version of CHI’24 paper.

Imputation Strategies Under Clinical Presence: Impact on Algorithmic Fairness (with V. Jeanselmeº, Z. Zhang, J. Barrett, B. Tom). Extended version of ML4H’22 paper.

Published works

Note on authorship: As faculty, I tend to follow the Business School convention, which does not attach special meaning to the “last author” and orders authors by contribution. º indicates a student co-author.

For a full list: Google Scholar

Selected Journal Papers

Leveraging Expert Consistency to Improve Algorithmic Decision Support.
Management Science, 2025.
with: V. Jeanselmeº, A. Dubrawski, A. Chouldechova.

Recovery Potential in Patients After Cardiac Arrest Who Die After Limitations or Withdrawal of Life Support.
JAMA Network Open, 2025.
with: J. Elmer, P. Coppler, C. Ratay, A. Steinberg, + 35 others.

Finding Pareto trade-offs in fair and accurate detection of toxic speech.
Information Research, 2025.
with: S. Guptaº, V. Kovatchev, A. Dasº, M. Lease.

Label Bias: A Pervasive and Invisibilized Problem.
Notices of the American Mathematical Society, 2024.
with: Y. Liº, M. Saar-Tsechansky.

Time to Awakening and Self-Fulfilling Prophecies After Cardiac Arrest.
Critical Care Medicine, 2023.
with: J. Elmer, M. Kurz, P. Coppler, A. Steinberg, S. DeMasi, N. Simon, V. Zadorozhny, K. Flickinger, C. Callaway.

Social Norm Bias: Residual Harms of Fairness-Aware Algorithms.
Data Mining & Knowledge Discovery, 2023.
with: M. Chengº, L. Mackey, A. Kalai.

Self-fulfilling prophecies and machine learning in resuscitation science.
Resuscitation, 2022.
with: J. Elmer.

Algorithmic Fairness in Business Analytics: Directions for Research and Practice.
Production & Operations Management, 2022.
with: S. Feuerriegel, M. Saar-Tsechansky.

Diversity in Sociotechnical Machine Learning Systems.
Big Data & Society, 2022.
with: S. Fazelpour.

Predicting Neurological Recovery with Canonical Autocorrelation Embeddings.
PLoS ONE, 2019.
with: J. Chen, P. Huggins, J. Elmer, G. Clermont, A. Dubrawski. [GitHub]

Machine Learning for the Developing World.
ACM Transactions on Management Information Systems, 2018.
with: W. Herlands, D. Neill, A. Dubrawski.

Analyzing medical image search behavior: semantics and prediction of query results.
Journal of Digital Imaging, 2015.
with: I. Eggel, C.E. Kahn, H. Müller.

Comparing image search behaviour in the ARRS GoldMiner search engine and a clinical PACS/RIS.
Journal of Biomedical Informatics, 2015.
with: I. Eggel, B. Do, D. Rubin, C.E. Kahn, H. Müller.

Selected Computer Science Conference Proceedings

Should you use LLMs to simulate opinions? Quality checks for early-stage deliberation. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), 2026.
with: T. Neumann, S. Fazelpour.

More of the Same: Persistent Representational Harms Under Increased Representation. In Advances in Neural Information Processing Systems (NeurIPS), 2025.
with: J. Mickel, L. Liu, K. Tian.

Perils of Label Indeterminacy: A Case Study on Prediction of Neurological Recovery After Cardiac Arrest. In Proceedings of the ACM Conference on Fairness, Accountability, and Transparency (FAccT), 2025.
with: J. Schoeffer, J. Elmer. 

A Critical Survey on Fairness Benefits of Explainable AI. In Proceedings of the ACM Conference on Fairness, Accountability, and Transparency (FAccT), 2024.
with: L. Deckº, J. Schoeffer, N. Kuehl. 

Explanations, Fairness, and Appropriate Reliance in Human-AI Decision-Making. In Proceedings of ACM CHI Conference on Human Factors in Computing Systems (CHI), 2024.
with: J. Schoefferº, N. Kuehl.
Honorable Mention Award (top 5%).

Same Same, But Different: Conditional Multi-Task Learning for Demographic-Specific Toxicity Detection. In Proceedings of the ACM Web Conference (formerly WWW), 2023.
with: S. Gupta, S. Lee, M. Lease.

Toward Supporting Perceptual Complementarity in Human-AI Collaboration via Reflection on Unobservables. In Proceedings of the ACM on Human-Computer Interaction (CSCW), 2023.
with: K. Holstein, L. Tumatiº, Y. Chengº.

Imputation Strategies Under Clinical Presence: Impact on Algorithmic Fairness. In Proceedings of the 2nd Machine Learning for Health Symposium (ML4H), PMLR, 2022.
with: V. Jeanselmeº, Z. Zhang, J. Barrett, B. Tom.

More Data Can Lead Us Astray: Active Data Acquisition in the Presence of Label Bias. In Proceedings of the Tenth AAAI Conference on Human Computation and Crowdsourcing (HCOMP), 2022.
with: Y. Liº, M. Saar-Tsechansky.

Justice in Misinformation Detection Systems: An Analysis of Algorithms, Stakeholders, and Potential Harms. In Proceedings of the ACM Conference on Fairness, Accountability, and Transparency (FAccT), 2022.
with: T. Neumannº, S. Fazelpour.

Human-AI Collaboration with Bandit Feedback. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), 2021.
with: R. Gaoº, M. Saar-Tsechansky, L. Hanº, M. Kyung Lee, M. Lease.

The effect of differential victim crime reporting on predictive policing systems. In Proceedings of the ACM Conference on Fairness, Accountability, and Transparency (FAccT), 2021.
with: N. Akpinarº, A. Chouldechova.

A case for humans-in-the-loop: decisions in the presence of erroneous algorithmic scores. In Proceedings of ACM CHI Conference on Human Factors in Computing Systems (CHI), 2020.
with: R. Fogliato, A. Chouldechova. [Video, Blog post summary]

What’s in a name? Reducing bias in bios without access to protected attributes. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics (NAACL), 2019.
with: A. Romanov, H. Wallach, J. Chayes, C. Borgs, A. Chouldechova, S. Geyik, K. Kenthapadi, A. Rumshisky, A. Kalai.
Best Thematic Paper Award.

Bias in Bios: A Case Study of Semantic Representation Bias in a High-Stakes Setting. In Proceedings of the ACM Conference on Fairness, Accountability, and Transparency (FAccT), 2019.
with: A. Romanov, H. Wallach, J. Chayes, C. Borgs, A. Chouldechova, S. Geyik, K. Kenthapadi, A. Kalai. [GitHub, Video]

What are the biases in my word embedding? In Proceedings of the AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society (AIES), 2019.
with: N. Swinger, N. Heffernan IV, M. Leiserson, A. Kalai.

Selected Peer-Reviewed Workshops

Mitigating label bias via decoupled confident learning. ICML AI & HCI Workshop, 2023. With: Yunyi Li, Maytal Saar-Tsechansky.

Doubting AI Predictions: Influence-Driven Second Opinion Recommendation. ACM CHI Workshop on Trust and Reliance in AI-Human Teams (TRAIT), 2022. With: Alexandra Chouldechova, Artur Dubrawski.

On the Relationship Between Explanations, Fairness Perceptions, and Decisions. ACM CHI Workshop on Human-Centered Explainable AI (HCXAI), 2022. With: Jakob Schoeffer, Niklas Kuehl.

Social Norm Bias: Residual Harms of Fairness-Aware Algorithms. ICML ml4data Workshop Spotlight Talk & ICML Socially Responsible ML Workshop, 2021. With: Myra Cheng, Lester Mackey, Adam Kalai.

Lessons from the deployment of an algorithmic tool in child welfare. Fair & Responsible AI Workshop, CHI, 2020. With: Riccardo Fogliato, Alexandra Chouldechova.

Learning under selective labels in the presence of expert consistency. Workshop on Fairness, Accountability, and Transparency in Machine Learning (FAT/ML), 2018. With: Artur Dubrawski, Alexandra Chouldechova.

Lass0: Sparse Non-Convex Regression by Local Search. NIPS Workshop on Optimization for Machine Learning, 2015. With: William Herlands, Daniel Neill, Artur Dubrawski.