In previous work, we studied the risks of compounding gender imbalances in occupation classification. We have also proposed algorithms for enumerating biases in word embeddings, and shown that widely used embeddings encode societal biases. Building on both lines of work, our NAACL paper tackles the question ‘how can we mitigate biases without requiring access to protected attributes?’ and explores ways of leveraging the societal biases encoded in word embeddings toward this end.
At CVPR 2019 we are co-organizing the first CVPR workshop focused on tackling global development challenges. This is part of a broader Computer Vision for Global Challenges (CV4GC) initiative, which aims to bring the computer vision community closer to socially impactful tasks, datasets, and applications for the whole world. Learn more about travel grants, the call for proposals, and other details here, and sign up to receive news here.
While visiting EPFL this summer, I was interviewed for their channel, ZettaBytes. In the interview, I discuss some of the ideas from our paper on Machine Learning for the Developing World, recently published in the ACM TMIS journal.
For the second year in a row, NIPS is hosting a one-day workshop focused on machine learning for the developing world (ML4D). This year’s workshop focuses on achieving sustainable impact within ML4D that can advance both machine learning research and global development objectives. The call for papers is open!
I am excited to be giving the closing keynote at the Data Science and Machine Learning for Development and Humanitarian Action session at the UNESCO Tech4Dev conference on June 27. I will also be giving a feature-length version of this talk at EPFL on June 28.