An Interdisciplinary Approach to AI Ethics Training
Sina Fazelpour is an assistant professor of philosophy and computer science at Northeastern University. His research centers on questions concerning values in complex sociotechnical systems that underpin our institutional decision making. He is a core member of the Institute for Experiential AI and co-founded the Intelligence, Data, Ethics and Society (IDEAS) summer institute for undergraduate students.
Sina recently sat down with PITUNiverse Editor Kip Dooley to share progress on the IDEAS summer institute, where undergraduate students learn from world experts on data science, ethics, computer science, philosophy and law about responsible development of data science and AI. The IDEAS institute is supported in its second year in part through a PIT-UN Challenge grant.
Kip Dooley: Sina, you’re about to run the second cohort of an interdisciplinary summer institute on AI. How did the IDEAS institute come about?
Sina Fazelpour: The motivations were twofold. First, I have both a technical background in engineering and a philosophical background in the values of technology. AI is a sweet spot for me as a practitioner and educator because AI systems very clearly create both benefits and burdens, whether in the context of allocating medical resources or hiring or some other domain. It is always going to be a complicated issue. Technologists working on AI need to be able to ensure that these systems simultaneously work in ways that respect privacy, lead to just and fair outcomes, and are robust in their performance. This is a very complex task, and we really don’t yet have good models for how to do it well.
One of the key things missing from the puzzle is an interdisciplinary perspective. We cannot approach these problems from solely a technical perspective, nor solely a humanistic or philosophical perspective. A technical assessment without ethical considerations is insufficient, and you really can’t assess these systems well ethically without knowing at least some of the technical details. Interdisciplinarity is a key skill we need to cultivate for public interest technologists, but our institutions, generally speaking, are behind on this.
Most undergraduates interested in technology don’t receive the type of instruction that will prepare them to approach issues from an interdisciplinary perspective. Engineering students have to take an ethics course, but it’s usually focused on how you, as a professional engineer, can avoid breaking rules. It focuses on what not to do, not on what you ought to do in your practice as an engineer. What values should you consider when designing a product? What ethical considerations should you embed throughout the design and development process? We don’t train people how to do this, and that’s extremely problematic.
As a result, when we try to convene interdisciplinary teams (in academia or in industry), people often lack a shared language to even talk to each other. And perhaps even more fundamentally, they don’t know when they have to talk to each other. Engineers might come to a product launch thinking they are all done, only to find that some kind of ethicist or regulator is telling them how the product can or cannot be used. The engineers haven’t considered that throughout the design and development, they have made choices — their own choices! — that are permeated with certain values and ethical assumptions.
So the first motivation for the IDEAS institute was to make sure that we introduced this type of interdisciplinary way of thinking about values and technology at an earlier stage of development for our students, so that interdisciplinary thinking and dialogue is second nature for them by the time they graduate.
The second motivation was about broadening participation in the field of AI and technology development more generally. We know there are significant issues of underrepresentation of different groups, both in scientific disciplines and in the humanities. Both fields need to become more inclusive, and the environments more welcoming to different identities, value sets, and experiences.
Why? Well, if you pay attention to the headlines, you’ll know that the harms of technology are not equally distributed. They disproportionately fall on members of historically disadvantaged groups. We want to make sure that people who are particularly affected by emerging technologies are among those making the decisions about how they are developed, deployed, and governed. This could mean making technical decisions, making philosophical decisions, legal decisions, regulatory decisions — technology touches every aspect of society, which is what public interest technology is trying to grapple with. We want to enrich the decision-making pipeline.
How did you bring together guest speakers from such a range of disciplines? Did you already have connections with people in these different fields?
Coming from a very interdisciplinary background really helps. In my Ph.D. program at the University of British Columbia, I was in the Philosophy Department, but I was working with neuroscientists and computer scientists. My postdoc at Carnegie Mellon was in philosophy, but I had a secondary appointment in machine learning. So those relationships proved very helpful both in terms of guest speakers and in shaping the program.
But to be honest, in the first year when funding was scarce, I just invited a bunch of my computer science and philosophy friends to come stay at my place for the week. It was really thanks to the generosity of my friends, who were willing to spend their own money to travel here and stay with me.
We all need a little help from our friends. … How will the program be different this year? What do you hope to build on from the pilot?
On the final day last year, the students were so excited to take what they’d learned and write a paper, make a video for social media, or design a product. I thought, “OK, the program needs to be two weeks.” The first week will provide the necessary technical background as well as the philosophical background on fairness, justice, and privacy; in the second week, students can work on group projects and presentations.
The Network Challenge funding will allow us to do two full weeks. It will be more impactful in terms of training, because the students will actually get to do something with the theoretical background.
We’ll also look to enrich the mentorship piece this year. Last year, we just had guest faculty; this year we’ll also have graduate students who will serve as mentors. Throughout the two weeks, the students will have time to talk to their mentors about their projects and also ask questions about what life looks like in academia or industry. They’ll have the opportunity to build networks.
We’ll also be inviting faculty from other PIT-UN schools, particularly ones that don’t have programs like this. Here at Northeastern, we have one of the highest densities of people working on the ethics of artificial intelligence. We want to share with others how to run these kinds of sessions, so they can create their own courses and programs and distribute this multidisciplinary ethics training across different types of institutions, not just the ones with a specialty like ours.