A new technology enables near-real-time satellite imagery of the entire world. There are exciting possibilities, but tech optimism should not blind us to the potential harms. I was interviewed about the risks of this technology for this article in New Scientist.
I was interviewed by Motherboard, the technology vertical at Vice, for their piece Algorithms Have Nearly Mastered Human Language. Why Can’t They Stop Being Sexist?
Our work on data-driven investigative journalism was awarded 2nd place in Premio ¡Investiga! 2019, an award for investigative journalism in Colombia.
Data analysis by Benedikt Boecking and me supported the courageous reporting of Jessica Villamil, from El Pais de Cali, on the killings of social leaders during Colombia's post-conflict period.
We are very excited to announce the 3rd NeurIPS workshop on Machine Learning for the Developing World (ML4D)!
This year’s theme is ‘Challenges and Risks’: What can go wrong with ML4D technologies? How do we tackle this?
Check out the call for papers! The submission deadline is September 13.
Our paper ‘What’s in a Name? Reducing Bias in Bios without Access to Protected Attributes’ won the Best Thematic Paper award at NAACL 2019!
In previous work, we studied the risks of compounding gender imbalances in occupation classification. We have also proposed algorithms for enumerating biases in word embeddings, and shown that widely used embeddings encode societal biases. Building on both lines of work, our NAACL paper tackles the question 'How can we mitigate biases without access to protected attributes?' and explores ways of leveraging the societal biases encoded in word embeddings toward this end.
I rewatch Hellman’s FAT* '18 keynote periodically, so I am excited to see that this year’s FAT* videos are already online!
And this year it includes my talk on our ‘Bias in Bios’ paper: