
Centering Human Relationships in the Age of AI

Remarks from Inioluwa Deborah Raji

2024 Tech for Humanity Prize Awardee

Deborah Raji is a Mozilla fellow and a computer science PhD student at the University of California, Berkeley, whose research focuses on algorithmic auditing and evaluation. She has worked closely with the Algorithmic Justice League to highlight bias in deployed AI products, and with Google’s Ethical AI team. Recently, she was named to Forbes 30 Under 30 and MIT Technology Review’s 35 Innovators Under 35.

These remarks were given at the 2024 Tech for Humanity Summit, which convened thought leaders from civil society, government, academia and industry to develop shared visions for a humane and democratic technological future. See the full agenda and watch the recorded livestream here.

 

I am honored and thrilled to receive this award on behalf of so many collaborators, so many co-conspirators who have been supporting me, teaching me, guiding me, and mentoring me in this work for many years. 

I first started in this field on the engineering side. Fresh out of my undergraduate degree in robotics in the summer of 2017, I naively began working at a computer vision company. It was my first real job on an applied machine learning team, and the entire time I worked there, I never stopped looking for myself. The first dataset I was given was a facial recognition dataset. I asked the engineer beside me if he, too, noticed that there was no one in the dataset who looked like me: that no one had darker skin, and that there were no dark-skinned women.

2017 does not seem that long ago, but seven years is an eon in the world of AI. My colleague said something along the lines of, “That’s just the way things are. That’s just the way things are done in this field. It’s hard enough to get data, and we can’t think about these questions of representation and fairness and diversity.”


That experience is exactly what led me to reach out to Joy Buolamwini at the MIT Media Lab and later the Algorithmic Justice League, and to work with Timnit Gebru and Margaret Mitchell, then at Google. These are the women who built me. I finally saw folks who were also concerned about these issues of bias and misrepresentation in these models and, more importantly, who were willing to work with me and guide me in developing the skills and the methodologies to examine these systems.

Since that summer of 2017, people have become much more aware of these issues. In 2018, we audited facial recognition systems for the first time and found that they did not work on the darker-skinned faces that had been excluded from the datasets, and worked worst of all on darker-skinned female faces. Since then, with financial support from the Mozilla Foundation and the MacArthur Foundation, I have been very fortunate to participate in various projects auditing these systems in high-stakes contexts such as law, medicine, and content moderation, identifying the ways they often fail for those who are overlooked, who are at the margins, who are not considered when setting the status quo.

The most rewarding aspects of this journey have come through looking for myself in these rooms and these spaces. Recently, I started engaging on the policy front to address the challenges I had faced as an auditor dealing with corporate retaliation. I wanted to advocate for audit access and data protections for those doing this third-party investigative work into automated systems. That was where I met Alondra Nelson, who has been such an incredible mentor on the policy side. Again, coincidentally or not, another Black woman very concerned about these issues and taking them as seriously as I felt they warranted.

Ralph Nader wrote in the mid-1960s about car crashes that resulted from a lack of regulation and control over the products sent out onto our roads. He shared stories of individuals whose lives were irrevocably changed by those products. One was Robert Comstock, a veteran garage mechanic whose leg was amputated after he was run over by a Buick that had been released without brakes. Whenever I talk about AI systems, I emphasize that these systems continue to collapse on the most vulnerable: Robert Williams, a Black man wrongfully arrested because of an incorrect facial recognition match. Carmelita Colvin, a Black woman falsely accused of unemployment fraud. Brian Johnson, also falsely accused of unemployment fraud because of an algorithmic failure, who spent two years in appeals and had to file for bankruptcy. An early story that resonated with me personally was that of Tammy Dobbs, an older woman with cerebral palsy who lost access to health care because of an algorithmic failure particular to her demographic and region.

These are the life-threatening consequences of system failures in AI and facial recognition. It is my privilege to help build out the ecosystem of people who can hold these companies accountable for the impact their products have on real people’s lives. I’m incredibly honored to receive this recognition and to accept this award on behalf of all those I’ve been fortunate to work with in the algorithmic audit space. Thank you.