
The Chaos of ChatGPT Points to an Opportunity:
Bridge the Gap between Humanities and STEM

Institutionalizing PIT

March 2023

Author: Deb Donig is an Assistant Professor of English at Cal Poly, a fellow at the Center for Innovation and Entrepreneurship, and a lecturer on the faculty of UC Berkeley’s iSchool. She is the co-founder of the Cal Poly Ethical Tech Initiative and hosts the Technically Human podcast.

Toward the end of 2022, OpenAI rolled out a new kind of AI technology: ChatGPT. Since then, tech firms across the AI space, driven by this new – and probably very lucrative – technological capacity, have raced to create a competitor.

What they end up creating will, predictably, be less about what is useful, or what people want, than about what will be profitable.

In the meantime, ChatGPT will change the structure and nature of any work that touches composition and writing; it will create new digital literacy problems; it will restructure labor and civic life in ways we can’t calculate or foresee. Some of these transformations will, no doubt, be good. Others, no doubt, will be spectacularly disastrous. 

It is said that prediction is the scourge of journalism and that counterfactuals are sins in scholarship. But I’m a professor of English Literature who teaches speculative science fiction and futurisms, and once in a long while that title becomes useful: the position allows me to indulge in a little of both.

Tech Design Questions are Ethics Questions

I wasn’t in the room where OpenAI’s engineers and research scientists imagined and built ChatGPT, or where executives decided to launch it. But my guess is that among the many conversations that asked “can we build this? And how?” far fewer asked “should we build this? And why?” At least, outside the parameters of answers like “It will be profitable.”

When we ask “should we build this? And why?” we’re asking crucial ethical questions. We are asking what kind of world we want to live in, and whether the way we are imagining, building, and creating will get us to that world.

As an English professor deeply invested in building a more humane technological future, I want to make the case here for why it’s essential to bridge the gap between the humanities and STEM fields as we all seek to further institutionalize our PIT programs: programs that train technologists to ask these questions, and programs that train humanities students to understand the technologies that so badly need such questions asked of them.

What if our technology were built on a multiplicity of intellectual backgrounds?

Often, I get asked why a professor of English Literature is engaged with questions of ethics and technology. To this question, I respond that before we build anything, we first have to imagine it. It’s worth our time investigating the terrain of our imagination to understand how and why we create as we do. There is no shortage of technologists building incredible, world-changing tools. But I wonder how many take time to imagine what kind of world we actually want to live in, what we think of as a “better world,” and why we think as we do. I also wonder how often those leading the imagining and production process think carefully about how the tech they’re building can help us get there.

Tech companies have plenty of smart people with great technical training, but they tend to work from the same set of assumptions – for example, “tech is good if it works.” Whether a piece of technology “works” is an important question, but it is just one of many questions we ought to ask, especially when designing tools like AI whose power we don’t yet fully comprehend.

When I first arrived at Cal Poly, I participated in a panel about ethics in AI. Only the first five minutes were devoted to AI’s technical dimensions; the rest went to debating the ethics of the technology. The conversation skidded across the surface of a very deep ethical tradition, one that many humanists spend decades immersed in to understand these questions and address them with clarity.

So, it felt rather odd to sit on a panel with highly trained technologists who asked these age-old ethical questions as though they were asking them for the first time, as though the humanists publishing on them, working through them, and thinking about them through the prism of that rich philosophical tradition, did not exist. 

The panelists had a formidable collective background in the technical intricacies of the subject, one that allowed them to understand its nuances in ways most people can’t. They should obviously be at the table when discussing what AI can do, and what it might do. But if we truly want to ask the best ethical questions about those technologies, and if we are truly interested in the best ethical answers, we should be in dialogue with people who possess the same kind of formidable background in the humanities – people who understand the nuances of ethical questions in ways most people can’t. Why not create a table that seats them too?

The Limitations of Efficiency

Our current discourse in PIT-UN rightly foregrounds important questions of diversity and inclusion. One line of reasoning behind foregrounding these values is that we build and create better when we imagine inclusively: the more varied the perspectives we include in the imaginative and collaborative building process, the better our products and outcomes will be. A diversity of perspectives subjects ideas to a kind of scrutiny available only in dialogue with people who have some distance from the truths one takes to be self-evident – from the assumptions, biases, and visions one assumes to be neutral and inherently “good.”

What would it look like if we had a technological environment that actually was built on a multiplicity of intellectual backgrounds? What if, in the design teams of tech companies, we had ethicists, psychologists, social workers, and artists in addition to technologists? And what if we treated that diverse range of expertise with the same kind of gravity as we treat technical knowledge, such that the values and assumptions of each field were considered in the production of new technologies? We might move slower. But we might also break fewer things.

Is a more efficient world always a better world?

Every field has primary principles. One primary principle of engineering, for example, is that of efficiency. That’s the thing engineers are taught to drive towards. In this line of thinking, more efficiency is inherently good. Now, efficiency is an important value – but, again, it is only one value of many.

An engineer created Soylent as a “lifehack.” The purpose of the product? To create a maximally efficient nutrient delivery system – one that provides maximal nutritional benefit while eliminating the need to spend time preparing food or hunting it down. Good! Sometimes I want to eat quickly too. (Never mind that women and athletes have been using this “lifehack” for dieting and bodybuilding for at least a century, so maybe this engineer isn’t as much of a market disruptor as he thinks. But I digress.)

Think of all the reasons we eat food: to build community, to recall memories, to spend time with family. Sometimes we make a recipe because it’s been handed down from our mother, from our grandmother, from the cultures that our ancestors left behind as refugees, when the only things they could take with them were those recipes – and recreating them brings back a visceral memory of the material reality of that once-loved, forever-lost home. Perhaps you make a four-course meal for the person you love most, to share the kind of intimacy that the lavish gustatory pleasure of breaking bread together allows.

Efficiency matters. It matters greatly. But it isn’t the only thing that matters. 

Ethics is the Naming and Balancing of Competing Values

A key ethical principle lies in the idea that we don’t have just one value that always dominates – any serious ethical dilemma is principally about what we do when a situation puts our values at odds with one another. Making an ethical decision requires us to understand which of those competing values we ought to elevate and allow to dominate in a given context.

Should we choose justice or love when deciding whether to accept an apology? Should we choose truth or mitigating harm to vulnerable populations when deciding questions of free speech on social media? In these moments, we select which value, at which moment, in the context of which problem, is the most important value in the hierarchy, understanding that prioritizing one value means limiting the claim of the others.

Efficiency, we might say, is often orthogonal to, and in competition with, values such as “love” or “caring.” When I discuss the trade-offs we make between values, especially those of efficiency and love, I often ask my students how many of them are in a romantic relationship. A smattering raise their hands. Then I ask them how many of them hope to be in a romantic relationship one day. Most of the rest raise their hands. Then I ask, “how many of you want your partner to love you efficiently?” They all laugh – because who wants to be loved efficiently? 

So, we know this is true of ourselves. But when it comes to envisioning what kind of world we want to collectively build, I wonder how often we allow our technologists to pre-determine that our world would be a better world if it were a more efficient world. Many of our technologies – the ones we’re told will “make the world a better place” – conflate the term “better place” with “more efficient place.” But is a more efficient world always a better world? Especially when we bring that world into being at the expense of elevating love, or care? 

[Image: Faculty at Cal Poly. Courtesy of the Center for Expressive Technology at Cal Poly.]

The technologies we accept and often pay for, or allow to govern our social and civic life, seem to presume so. They’re profitable partly because measuring efficiency is so much easier than measuring care or love, and technologies with measurable outcomes are far more profitable than technologies without them. But we should not conflate “better” with “profitable” or “efficient” simply because we can measure those things. Values like “love” and “care” are valuable beyond measure. What would our world look like if we built technologies that optimized for them instead?

Of course, building toward the values of “love” and “care” is very difficult for many reasons beyond those I have highlighted here. Most humanistic values aren’t really measurable, and we substitute measurable proxies for those values at our peril. A chief benefit – and challenge – of humanistic thinking lies in the fact that we don’t need to apply the same kind of metrics, with precise and intransigently encoded numerical values, to ethical deliberations for them to be fruitful. Humanistic values have a shape, but they are what I call “open concepts”: their conceptual shape is indefinite, and we allow that shape to morph situationally.

[Image: Students at Cal Poly. Courtesy of the Center for Expressive Technology at Cal Poly.]

Here’s an example of what I mean. Let’s say that I have a pie, and I want to share that pie equally with my colleague. How should we split it? One answer might be 50/50: we each get half. That’s a standard construction of equality. But what happens when my colleague is a 6’2”, 250-lb football player who just came out of practice? Is giving him half and keeping half still equal? I’m 5’1” and I work at my computer most of the day. Here, 50/50 is, in a sense, not “equal.”

Let’s see what happens when we add variables and extra values: say that in addition to being a 6’2”, 250-lb football player, my colleague doesn’t get very much work done, and I end up doing all the work to complete a collaborative project. And in this scenario, say that the pie is a reward for work. Should I now get more pie than he does because I earned it, even if he needs more? That would fulfill the principle of “just desserts,” even as it violates the justice principle of “to each according to his needs.”

In each of these sets of circumstances, the numerical value (how much pie) must transform to meet the larger question of which values matter in each specific scenario; in the humanities, we allow that kind of transformation, and allow our terms (justice, or love, or care) a certain laxity, understanding that they are real, but not ultimately so – that their shape shifts infinitely and constantly.

[Sidebar: Technically Human is a podcast about ethics and technology that asks what it means to be human in the age of tech. Each week, Professor Donig interviews industry leaders, thinkers, writers, and technologists about how they understand the relationship between humans and the technologies we create.]

Technologists don’t have this luxury. Technological products must encode values numerically; you can’t build a piece of hardware or software with infinitely shape-shifting, context-dependent properties. Values must be transcribed into material forms, numerical variables, and mathematical formulas. You can have some variational bandwidth, but not the kind of infinite variation humanistic thinking allows.
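To make that constraint concrete, here is a minimal sketch in Python – mine, not anything drawn from a real product, with every name and weight invented purely for illustration – of what happens when “share the pie fairly” has to become executable code. Each notion of fairness must be frozen into a fixed numerical rule before the program can run:

```python
# A minimal, purely illustrative sketch: every name and weight below is
# hypothetical. The point: code must pin "a fair share" to one fixed
# numerical rule per context.

def split_pie(weights, pie=1.0):
    """Divide the pie in proportion to each person's assigned weight."""
    total = sum(weights.values())
    return {person: pie * w / total for person, w in weights.items()}

# Three encodings of "fair" -- each hard-codes a different value:
print(split_pie({"me": 1, "colleague": 1}))    # formal equality: 50/50
print(split_pie({"me": 1, "colleague": 2.5}))  # "to each according to need"
print(split_pie({"me": 3, "colleague": 1}))    # "just desserts": reward work
```

The humanist can let “fair” shift its shape with context; the program has to commit in advance to one of these weightings, or to an explicit rule for choosing among them.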

This gives me great sympathy for technologists seeking to build technologies that accommodate and elevate a diversity of human values and needs, across a diversity of cultures, situations, and experiences. I think some humanities thinkers worry that if we admit this sympathy, or if we truly reckon with the difficulty of developing technological products, we run the risk of dulling the force of our critiques. But I would argue that the humanities will actually be strengthened by a dialogue that allows technologists to respond to our critiques, and us to theirs.

We should acknowledge that their work is circumscribed by the material reality of the tools they are building and by profit structures beyond many tech workers’ control. As writers and social critics, we can lob critiques at big tech through op-eds all we want, but unless we grapple with the material realities of the industry and dialogue with the people who work in it, I’m not sure our critiques will ultimately achieve the end of making ethical change. We have to understand why and how technical products are the way they are, and lodge criticisms that reckon with the physical limitations of computer code and hardware, as well as the social, cultural, political, and economic forces that produce them.

How PIT Reimagines the Role of Humanities

Over the past few years, I have been excited to see more STEM programs looking to integrate ethical questions into their courses, or to provide ethics modules in their curricula, both of which often draw on scholarship produced by humanities thinkers.

But it’s not really fair to ask computer scientists to teach ethics, and it’s a real loss when the scholars who developed the foundations and frameworks for thinking about these problems don’t get to teach and deliberate them with students in the academy, or when the academy doesn’t provide a place for them to do this work. There are plenty of well-qualified humanities scholars who have spent years, if not decades, steeped in philosophy. We might include modules of ethical and humanistic thought in STEM curricula, but we might also build STEM modules into humanities curricula, and from that model train a next generation of humanist technologists and technologically informed humanists.

I say this with an acute awareness that the model of education that has persisted in the humanities is facing, and will increasingly face, serious pressure. While those of us teaching in the humanities are very good at getting our students to ask deeper questions and to think critically about the major issues of our times – the ethics of technology, social justice, the history of ideas – we have historically been less consistent in asking the hard question of what happens to our students once they leave the university.

The structure of undergraduate university education and its market value have been transformed over the last several decades; the structure of the humanities has largely not adapted to that change. Across the humanities, we are painfully aware that an undergraduate with a degree from our disciplines will likely have a harder time finding a job than that student’s peer in computer science – a reality that’s hard to square with our focus on social justice when we face the fact that many students – especially first-generation students and students from underrepresented communities – take on astronomical debt to go to college. We have a duty to those students, who are less likely to have a social safety net or financial network when they graduate, to give them degrees that provide a secure future.

The humanities still roughly follows a 19th century European model of education

Humanities programs have resisted the idea that their curricula should bow to the market. We long for the days when knowledge was good for its own sake and learning was its own civic end. I personally get giddy at the idea of teaching Nabokov, who in his own words described literature as primarily about aesthetic “enchantment.” For Nabokov, the aim of reading is to “grasp the individual magic of [his] genius and to study the style, the imagery, the pattern of [his] novels or poems.” The value of knowing and understanding that enchantment is beyond measure.

We can’t lose sight of the value of knowledge itself, but we also can’t lose sight of what that value will allow students to do, in practical terms. It isn’t an accident that a four-year engineering degree gives a new graduate the exact skills required for an entry-level engineering job; in the wake of the Second World War, STEM professors collaborated with industry to create a model of education in which a four-year curriculum would provide a graduate with the exact training required to enter an industry role. It was a model concerned with meeting industry demands and with providing a professional pathway to students who lacked means, allowing them to access the workforce and make a life for themselves – to build security, stability, and family.

Meanwhile, the humanities still roughly follows a 19th century European model of education that emerged when aristocrats went to university to become men of letters (and I use the word “men” because they were indeed all men). These men could afford to learn for learning’s own sake because they didn’t have to get a job with that knowledge – they were landed gentry or landed gentry-adjacent. They went to college to become interesting people at dinner parties. But this is not the 19th century, and our students are not aristocrats. Our students do not go to college to become interesting people at dinner parties – they go to college because, among the many other things that college provides (self-development, learning, experiences), they seek credentials that will allow them access to the market economy.

The students who need this access the most are the ones we hope will take our classes, and the ones we can help when we talk about the things the humanities disciplines claim to care about, like social justice, equity, and inclusion. We have to think about how, and whether, the degrees we award and the skills we teach will give them the access they need to gain the means toward this kind of social transformation. The knowledge alone is simply not enough. We might like it to be, but it isn’t. Our students need skills and a credential that will allow them economic access.

Where PIT Can Take Us

The vision of Public Interest Technology excites me because it offers a braiding of philosophical, civic, and technical knowledge. 

The humanities has a crucial opportunity, and an urgent need, in this particular moment: to create new models of education that allow our students to work on technological products and to access the means of production – in ways that translate crucial thinking about values, ideals, the history of ideas, and the possibilities of imagination into the technologies we build.

I joined PIT because I believe in the kind of transformation it envisions. I think this infrastructure will help us imagine better, in ways more aligned with our human values. And it will give us the means and the institutional architecture to do so.