
The Role of Public Interest Technologists in an Age of AI Hype


May 2023

Dr. Suresh Venkatasubramanian, Brown University

Author: Suresh Venkatasubramanian is a Professor of Computer Science and Data Science at Brown University. He recently served as Assistant Director for Science and Justice in the White House Office of Science and Technology Policy, where he helped co-author the Blueprint for an AI Bill of Rights. 


Since ChatGPT’s release in November 2022, there’s been a lot of discussion about the potentially world-changing implications of generative AI. As a computer scientist who co-authored the Blueprint for an AI Bill of Rights, I don’t see generative AI as a completely new or unusually dangerous threat. Despite extraordinary claims by tech entrepreneurs and some AI researchers that ChatGPT points to an inevitable evolution of general or sentient AI that could enslave or kill us all, I am not worried at this point about AI sentience. I am worried, however, that a critical mass of people will be made to worry about AI sentience, which will distract from the manifold ways that AI-powered systems are already causing harm, especially to marginalized and vulnerable populations, and from the role that humans need to play in regulating and revising these systems.

Image: The White House Blueprint for an AI Bill of Rights (screenshot from https://www.whitehouse.gov/ostp/ai-bill-of-rights/)

Prior to ChatGPT, many of the companies deploying AI tools framed artificial intelligence as a distinctively nonhuman system. We were told that these systems could synthesize and sort far more data than the human mind ever could, making them neutral arbiters of information, free from the limits of human capability and the errors of human bias. It took a lot of work by researchers, advocates, and journalists to show exactly how and why this claim doesn’t hold up. AI facial recognition tools routinely misidentify Black and brown people; AI hiring algorithms often exclude women and other historically marginalized groups; social media algorithms are optimized to sensationalize, not to inform or connect people. These systems are error-prone, and they often amplify patterns of bias in ways that are hard for us to see or understand.

How do we address these harms? By this point, we understand them quite well. The five main principles we outlined in the Blueprint for an AI Bill of Rights — ensuring system safety and effectiveness, protecting us from algorithmic discrimination, preserving the privacy and limited use of our data, demanding that systems be visible and explainable, and ensuring that we always have human alternatives, consideration, and fallback — represent our best understanding of how to protect people from the harms of unchecked and misguided automated systems.

And lawmakers are taking action on these concerns. State legislatures across the U.S., and across its political spectrum, are starting to experiment with legislation. As I recently wrote with two colleagues for the Brookings Institution, these bills “seek to balance stronger protections for their constituents with enabling innovation and commercial use of AI.” Regulatory agencies have also come out strongly in support of AI regulation. As Federal Trade Commission Chair Lina Khan put it, “There is no AI exemption to the laws on the books.”

A Troubling Rhetorical Shift

There is good momentum in both government officials’ and the general public’s understanding of technological harms. Since ChatGPT’s release, a far wider segment of the population than ever before can see what researchers have been saying for some time: AI-powered systems often deceive, obfuscate, and make unexplained and untraceable errors.

But I see the proponents of AI taking up a new rhetorical strategy that threatens to derail us. They are now saying the exact opposite of what they said before: AI is not nonhuman after all; it is actually on its way to sentience.


This shift in rhetorical strategy, away from “AI is decidedly not human, and that’s why it is good” and toward “AI could be sentient, and we should all be afraid!”, threatens to co-opt our genuine collective concern and drive us in directions that don’t make sense, both technically and in terms of mitigating harm. Since ChatGPT was released, at least eight more studies on the harms of AI systems to minority communities have been published. People are losing health care coverage because of biased AI systems. Human lives are at stake right here and right now, but what are we talking about? Hypothetical threats from sentient AI, a technology that does not yet exist.

AI sentience is a compelling story, one that builds on Hollywood depictions like The Terminator or 2001: A Space Odyssey. But a good story doesn’t make something real, and we have to wonder why companies like OpenAI, Microsoft, and Google, all jockeying for AI market share, are creating what is essentially a misinformation campaign about AI sentience. Why are they so keen, all of a sudden, to depict AI as possibly sentient, and themselves as the only ones who can protect us from it?

Our Role as Public Interest Technologists

It is our job, as experts and public interest technologists, to be transparent about what is known — and not known — about generative AI, and to seek to understand these systems the same way we researched and exposed the prior generation of AI systems used for decision making. Our collective confusion about the risks and rewards of generative AI stems in large part from experts and trusted spokespeople doing the opposite: making overconfident and partially or entirely unfounded claims (not unlike ChatGPT itself) about AI sentience.

Let us try to understand the large language models that undergird generative AI — how they appear to do in-context learning, why new behaviors seem to emerge at scale, and, most importantly, what their limits are (as with any automated system since the Turing machine).
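To make the first of these questions concrete: “in-context learning” refers to a model picking up a task from worked examples placed directly in its prompt, with no update to its weights. Below is a minimal sketch, in Python, of what such a few-shot prompt looks like; the sentiment task, the examples, and the prompt format here are hypothetical illustrations, not any particular vendor’s API.

```python
# A minimal, self-contained sketch of in-context learning: the "learning"
# happens entirely inside the prompt, with no change to the model's weights.
# The task, examples, and format below are hypothetical illustrations.

FEW_SHOT_EXAMPLES = [
    ("The movie was a masterpiece.", "positive"),
    ("I want my money back.", "negative"),
]

def build_few_shot_prompt(new_review: str) -> str:
    """Assemble a prompt: an instruction, worked examples, then the query."""
    parts = ["Classify the sentiment of each review as positive or negative."]
    for text, label in FEW_SHOT_EXAMPLES:
        parts.append(f"Review: {text}\nSentiment: {label}")
    parts.append(f"Review: {new_review}\nSentiment:")
    return "\n\n".join(parts)

print(build_few_shot_prompt("The plot made no sense."))
# A large language model given this prompt will often complete the pattern
# ("negative") for a task it was never explicitly trained on. Nothing about
# the model has changed; it is pattern completion over the prompt.
```

That a model can often complete such a pattern for a task it never saw in training, purely from in-prompt examples, is precisely the kind of behavior that deserves careful empirical study rather than talk of sentience.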

In the meantime, it is incumbent upon us as academics, researchers, teachers, and university leaders to build on our hard-earned bodies of knowledge and encourage a saner, more action-oriented public discourse around the future of AI, lest we lose our footing and fall down a rabbit hole of science fiction-inspired musings. We are humans, and sorting through the nuances and ambiguities of complex systems is core to the human endeavor. We, not computer programs, are the ones with sentience. Let us put our minds to good use.