Will A.I. Go the Way of the Roomba?
Public interest technologists respond to AI hype
from the July 2024 PIT UNiverse Newsletter
One year after the so-called Pause Letter, in which tech billionaires and leading researchers called for a six-month pause on developing the most powerful AI systems, notable tech journalist Julia Angwin argued in a May 2024 New York Times op-ed that “the question isn’t really whether AI is too smart and will take over the world. It’s whether AI is too stupid and unreliable to be useful.”
We’ve seen enough AI failures (some humorous, some horrifying) to know that AI certainly doesn’t deliver on its creators’ biggest promises. Angwin questions whether “we as a society should be investing tens of billions of dollars, our precious electricity that could be used toward moving away from fossil fuels, and a generation of the brightest math and science minds,” when the most notable outcomes are, as she sees them, “incremental improvements in mediocre email writing.”
But is Angwin overstating the case? From a public interest technology perspective, are there applications and upsides of AI that we should claim and celebrate? We put this question to public interest technologists from across disciplines who are experimenting with and thinking through ways to design, use, and govern AI in the public interest.
Suresh Venkatasubramanian (Brown University)
I sensed frustration, annoyance, and even a smidgen of exultation in Julia Angwin’s article about large language models (aka AI). She’s an amazing tech journalist who, at ProPublica, The Markup, and now Proof News, has done the crucial work of methodically debunking the overheated claims made by technology providers. She has good reason to ask whether the whole enterprise is bound to fail, spectacularly or with a whimper. And she is correct that the hype machine around AI clouds our view of the technology itself.
"It’s important that we separate the claims about what the technology can do from the technology itself."
But her argument risks swinging the pendulum too far in the other direction from the moment of hype we are in. A tech company making overheated claims about its technology is about as surprising as a chatbot hallucinating on a topic it has no business addressing.
However, the tech itself is genuinely impressive. Whether you believe a large language model is sentient or a stochastic parrot, the fact that I can ask GPT-4 to clean up an introduction or smooth out a choppy transcript, and get a plausible result, is remarkable. There’s a level of excitement and energy in how my students interact with LLMs and other generative AI systems that I remember feeling myself in the early days of the web.
It’s important that we separate the claims about what the technology can do from the technology itself. The claims are overheated, driven by agendas, and fueled by the promise of billions of dollars in profit. I have no patience for them. But as a computer scientist, what I’m interested in is how these systems do what they appear to do, and what the limits of their abilities are. It is only by understanding how generative AI works, and where it fails, that we can start to build technology that can really assist us in our daily lives and actually serve the public interest.
Let’s be skeptical about claims with no evidence, especially when made by entities that have a vested interest in making such claims. But let’s also be inquisitive and curious about AI, about what we can do with it – and what we can’t. That’s how we bring rigor and scientific thinking into a space that sorely needs both.
Read more: How Suresh Venkatasubramanian helped write the White House’s framework for AI governance (Fast Company)
Maria Filippelli (Data + Technology Consultant)
Innovations that have fundamentally changed economies, like the movable-type printing press or refrigeration or paper currency, have one thing in common: They were designed to solve a specific problem.
AI, however, is often presented as a solution without a fully defined problem. Julia Angwin’s framing of AI as “too stupid and unreliable to be useful” perpetuates the myth that AI is all-encompassing: either it does everything, or it does nothing.
"AI is often presented as a solution without fully defining a problem."
That framing completely misses how people and organizations are actually incorporating AI into their lives and operations. The uses are broad: people use ChatGPT proactively to source vacation itineraries, find ideas for kids’ birthday parties, or generate computer code. We’re also seeing pushback, with some employers asking applicants not to use generative AI tools to develop cover letters and other application materials.
Angwin’s article also leaves out what we should focus on with generative AI: transparency and accountability. As AI products continue to consume our data, and as proponents tout false successes, like how well these products perform on the bar exam or how many new chemical compounds they produce, it becomes imperative that we interrogate how tech companies are producing those results. Researchers were able to debunk those two claims, but how many others fester without proof? Developers of AI should be clear about the effectiveness of their products, where they acquired their information, and how they counteract bias in their algorithms.
Perhaps most problematic of all the generative AI overhype is the blatantly false information these products can produce, referred to as “hallucinations.” A civil lawsuit currently underway in metro Atlanta addresses the legal recourse available to people when an AI product produces untrue and defamatory information about them. The case is an early test of how we can hold AI developers accountable for their products and encourage them to be more transparent about them. Only then can we move past the hype into pragmatic discussions of the actual problems AI can solve.
Read more: Maria Filippelli on defining public interest technology (Data & Society)
Deb Donig (Cal Poly)
Julia Angwin’s article compares generative AI to the Roomba, a “mediocre vacuum robot that does a passable job when you are home alone but not if you are expecting guests.” In other words, generative AI gets the job done, but haphazardly, without the polish one would expect from human labor.
"Generative AI is a technology that — if we frame its utility and role correctly — will unlock more human creativity."
That might be true, but Angwin’s argument presents a false binary: either technological products do things as well as humans and therefore replace human labor, or they are inept and useless. Technological products can also supplement human labor rather than substitute for it. For example, I am a notoriously mediocre housekeeper, and my houseguests know not to expect too much. But the Roomba was never meant to replace my housekeeping; it was meant as a labor-saving device, allowing me to automate a repetitive task so I can concentrate on the finer dimensions of housekeeping that require nuance.
This is not a small thing. ChatGPT is at its most entertaining when it writes a joke or a poem in the style of a favorite author. Less spectacularly, but more significantly, it can also comb through vast databases to identify key insights, providing an instrument of perception and prediction that can be used to tackle complex public interest problems. Take the health care system, where the structure and utility of critical services leave many Americans unable to access care at all, and where insurers have perverse incentives to withhold care or inflate its costs to their own benefit. Thoughtful, intentional applications of generative AI could productively disrupt this market in ways that help patients and doctors wrest back control from corporate interests, driving down costs and increasing access to care.
In my own industry, education, ChatGPT has made obsolete outdated forms of writing instruction like the five-paragraph essay, and that should push teachers to reimagine what writing is: not a product, but a method for developing and articulating new ideas. The pressure that generative AI places on our teaching methods can help us revitalize or replace forms of teaching that have long been ill-suited to the 21st century. Generative AI is a technology that — if we frame its utility and role correctly — will unlock more human creativity.
That said, Angwin is right to point out the danger this technology poses to labor, especially within our specifically American form of capitalism, which developed in the context of slavery and generally aims to drive labor costs down to zero (except for elite workers). Many employers will use this tool wherever they can to cut labor costs, in ways that increase profits but won’t actually deliver the oft-promised and romanticized “more time with family” or “more time to read”; instead, they will load yet more labor onto employees.
Hear more from Deb Donig on her podcast, Technically Human
Theodora Dryer (Water Justice & Technology Studio / NYU)
As a historian and critical policy analyst, I believe it’s essential for us to question the premise of Julia Angwin’s provocation, see the bigger picture, and go much further in our critique of AI.
First, what is AI? It is an amorphous label referring to a broad suite of data science, data acquisition, and computing processes used to make predictive interpretations or generate new textual and visual materials and data.
"AI futures are a powerful form of climate change denialism that take us further away from community, sharing resources, and living with the Earth."
Beyond its technical definitions, AI represents constellations of power and economic relations that demand substantive critique and social movement. AI programs do not exist in the cloud; they operate through dual processes of abstraction and extraction here on Planet Earth, across specific sites and localities. AI is therefore neither a uniform project nor a political monoculture that we can analogize to a singular piece of technology like the Roomba, any more than we could simply turn off or “pause” its development.