
PIT in Practice: Carnegie Mellon

Building the Field of Responsible AI

The promises and perils of generative AI — an umbrella term for technologies that, trained on large datasets, can create words, images, video, and data in response to human “prompts” — have recently jumped into the headlines. But technology scholars such as Ramayya Krishnan, dean of Heinz College of Information Systems and Public Policy at Carnegie Mellon University, have been thinking and working through them for decades.

“The framework of consequential decision-making — examining situations where getting a decision wrong can have significant costs to society and individuals — goes back almost 30 years,” he says.

“We’d been thinking about these challenges a long time before ChatGPT came along.”

In the 2010s, automated decision-making tools proliferated across society, including in education, transportation, health care, policing, and other high-stakes settings. Krishnan and his colleagues saw an opportunity to translate their methods to the fast-growing field of AI. “There were all kinds of problems with these new automated systems, like misidentifying people with darker skin,” he says. “In consequential decision-making, you consider many criteria, like equity, privacy, and transparency, so you don’t end up optimizing for efficiency only.”

Through Heinz College and the interdisciplinary Block Center for Technology and Society, Carnegie Mellon has grown an array of programs to fund research, prepare young technologists, and advise decision-makers to mitigate AI harms and guide its development and deployment towards the public interest. 


Responsible AI

Faculty from several Carnegie Mellon colleges and institutes have developed public interest technology (PIT) courses and research since CMU joined the Public Interest Technology University Network (PIT-UN) in 2019. 

Through a 2019 Network Challenge grant, professor Christopher Goranson created the Policy Innovation Lab, a master’s-level course that trains students in how to design public interest technology services and places those with the most promising projects into summer fellowships to continue their work with external partners. In GovScan, a recent project, students developed an original generative AI tool to help public servants quickly find necessary information in vast government records.

Through a 2021 Network Challenge project, faculty from the Human-Computer Interaction Institute convened conversations with local community organizations and nonprofits to inform the development of PIT curriculum and develop best practices for community partnerships. A 2023 grant is underway to train social workers in how to recognize and mitigate AI failures in housing. 

These projects and more informed the development of the Responsible AI program at the Block Center for Technology and Society, where Krishnan is faculty director. The Block Center, founded in 2019, funds interdisciplinary research projects and connects them to policymakers to translate academic insights into real-world impact.

“We recognize that AI is a multidisciplinary field,” says Krishnan. “Carnegie Mellon has deep strengths in computer science, ethics, public policy, information, and business. Each of these schools was thinking about these questions from their own perspectives, and all these elements come together in the Block Center’s Responsible AI program.”

 Professor Jodi Forlizzi, a computer scientist who co-leads the program, recently testified before Congress about AI’s impact on the workforce. Her colleague Rayid Ghani is a data scientist who developed the Data Science for Social Good Summer Fellowship to teach data scientists how to use machine learning models to support health, housing, and other public interest issues.

In February 2024, the Block Center convened experts from academia, government, industry, and civil society to inform the National Institute of Standards and Technology’s approach to testing AI models for safety. This gathering, along with “A Responsible Voter’s Guide to Generative AI in Political Campaigning,” is just one recent example of how CMU is working to translate insights from academia into actionable recommendations for policymakers.

At the state level, the Block Center is collaborating with Pennsylvania Governor Josh Shapiro’s administration on opportunities to leverage faculty expertise and advisory support for the state’s Generative AI Governance Board, as well as fostering additional research support on generative AI usage.


Professor Hoda Heidari, co-lead of the Responsible AI program, introduces the concept of red teaming for AI models at a February 2024 event at CMU.

Preparing a Responsible AI Workforce

The Heinz College Career Center hosted a 2023 PIT Career Fair to build interest in public interest tech careers among students and regional employers. Sessions featured emerging opportunities at the intersection of technology and social impact, and 17 local and national employers from the private sector, civil society, and all levels of government met with students to discuss employment opportunities. Three-quarters of surveyed students said they were likely to pursue professional opportunities in PIT.

At Heinz College, foundations of responsible AI are being embedded into the curriculum to ensure that leaders across business and the public sector are equipped to make AI work for their organizations. In addition to existing master’s-level coursework and an executive leadership program in AI, the school will launch a new master’s degree in AI Management in 2025. Students will learn to operationalize AI with a focus on ethical and data-driven deployment.

While the problems posed by AI are not necessarily new in the world of technology, Krishnan says that the context we find ourselves in has made preparing a new generation of responsible technologists all the more critical. “The prior technologies were simply not as easily deployable and accessible as generative AI tools are today,” he says. “The need for responsible use of tech has become even more urgent.”

