I was interviewed by Motherboard, the technology vertical at Vice, for their piece ‘Algorithms Have Nearly Mastered Human Language. Why Can’t They Stop Being Sexist?’
Our work on data-driven investigative journalism was awarded 2nd place in Premio ¡Investiga! 2019, an award for investigative journalism in Colombia.
Data analysis by Benedikt Boecking and me supported the courageous reporting of Jessica Villamil, from El Pais de Cali, on the killings of social leaders during the Colombian post-conflict period.
We are very excited to announce the 3rd NeurIPS workshop on Machine Learning for the Developing World (ML4D)!
This year’s theme is ‘Challenges and Risks’: What can go wrong with ML4D technologies? How do we tackle this?
Check out the call for papers! Deadline to submit is September 13.
Our paper ‘What’s in a Name? Reducing Bias in Bios without Access to Protected Attributes’ won the Best Thematic Paper award at NAACL 2019!
In previous work, we studied the risks of compounding gender imbalances in occupation classification. We have also proposed algorithms for enumerating biases in word embeddings, showing that widely used embeddings encode societal biases. Building on both lines of work, our NAACL paper tackles the question ‘How can we mitigate biases without requiring access to protected attributes?’ and explores ways of leveraging the societal biases encoded in word embeddings toward this end.
I rewatch Hellman’s FAT* ’18 keynote periodically, so I am very excited to see that this year’s FAT* videos are already online!
This year’s set includes my talk on our ‘Bias in Bios’ paper.
At CVPR 2019 we are co-organizing the first CVPR workshop focused on tackling global development challenges. This is part of a broader Computer Vision for Global Challenges (CV4GC) initiative, which aims to bring the computer vision community closer to socially impactful tasks, datasets, and applications for the whole world. Learn more about travel grants, the call for proposals, and other details here, and sign up to receive news here.