How Public Interest Tech Principles Can Shape the Future of Data Science and Artificial Intelligence



Data Science & AI

May 2023

Public Interest Technologist Afua Bruce

Author: Afua Bruce is the founder of the ANB Advisory Group, co-author of The Tech That Comes Next, and former Director of Engineering at New America Public Interest Technology. In early 2023, ANB Advisory Group conducted a scan of data-science-for-impact programs at PIT-UN member institutions, as well as a review of data science projects that have received PIT-UN Challenge funding.

It has been more than a decade since Harvard Business Review declared the profession of data scientist to be the “sexiest job of the 21st century.” Since then, we have seen industry embrace data science as businesses seek ways to differentiate themselves using insights and predictions based on data about their consumers, their markets, and their own organizations. Accordingly, research into data science has increased, and academic institutions have created a number of credentialed programs and research institutes for students and faculty. Data science has proven its ability to make organizations faster and more efficient. However, as many scholars, practitioners, and advocates have pointed out, that same speed and efficiency can also magnify social inequities and public harms.

Higher Education and Generative AI
An April 2023 PIT-UN webinar explored challenges and opportunities in higher education posed by generative AI

At the same time, the field of artificial intelligence has greatly expanded, as has its embrace by industry and the general public. AI now streamlines how organizations take notes, process payroll, recommend products to clients, and much, much more. Recent product releases and headlines about artificial general intelligence (the theoretical possibility that AI could perform any task humans can perform) have spurred a new round of conversations about how AI could transform human society — or destroy it altogether, depending on one’s perspective.

With widespread use of AI, the workforce will certainly shift as some tasks and perhaps even entire jobs will be performed by AI systems. Many colleges have made significant investments in AI research programs. Many institutions have recognized the importance of training students in how to design and develop AI systems, as well as how to operate in a world where AI is prevalent. And once again, many scholars, practitioners, and advocates have warned that without more intentional and ethical designs, AI systems will harm, erase, or exclude marginalized populations.

The Intersection of Data Science, AI and Public Interest Technology

Data science and artificial intelligence are two separate, but related, computational fields. As Rice University’s Computer Science department describes:

While there is debate about the definitions of data science vs. artificial intelligence, AI is a sub-discipline of computer science focused on building computers with flexible intelligence capable of solving complex problems using data, learning from those solutions, and making replicable decisions at scale.

Data scientists contribute to the growth and development of AI. They create algorithms designed to learn patterns and correlations from data, which AI can use to create predictive models that generate insight from data. Data scientists also use AI as a tool to understand data and inform business decision-making.
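To make the division of labor concrete, the hedged Python sketch below illustrates the kind of workflow described above: a data scientist fits a predictive model from historical data, and an application can then use that model to score new cases. The file name, column names, and model choice are assumptions invented for illustration, not a description of any program discussed in this report.

    # Illustrative sketch only: a hypothetical predictive-modeling workflow.
    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    # Hypothetical dataset of historical cases with numeric features
    # and a known 0/1 outcome column.
    df = pd.read_csv("historical_cases.csv")   # assumed file name
    X = df.drop(columns=["outcome"])
    y = df["outcome"]

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0
    )

    model = LogisticRegression(max_iter=1000)
    model.fit(X_train, y_train)                # learn patterns from the data

    print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))

The specific estimator matters less than the structure: the model’s behavior is determined by the data it is trained on, which is why the questions posed below about data, fairness, and accountability matter so much.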

In practice, data science and artificial intelligence programs at some institutions are seen as competitors for talent and funding, at others as collaborators, and at still others they remain organizationally separate. As both data science and artificial intelligence garner more and more attention from universities, students, and employers, we must ask ourselves how to balance the promise and excitement of these fields with the need to develop the associated algorithms responsibly. When systems can automatically influence who is eligible to be hired or promoted, who gets access to housing, or who can receive medical treatment, those designing the systems must understand how to approach problems with not just efficiency and profitability in mind, but also equity, justice, and the public good.

Public interest technology provides a framework to tackle these challenges. “By deliberately aiming to protect and secure our collective need for justice, dignity, and autonomy, PIT asks us to consider the values and codes of conduct that bind us together as a society,” reads an excerpt from PIT-UN’s core documents. 

What could it mean for designers and implementers of data science and AI to “advance the public interest in a way that generates public benefits and promotes the public good”? Public interest technology provides a way to ask, research, and address the following key questions:

  • How do technologists ensure the tools they design are deployed and governed responsibly within business, government, and wider societal contexts?
  • What data sets and training data are being used to design these systems? Do they represent the nuance of human populations and lived experience? Are they representative enough to ensure that analyses or predictions based on the data will be fair and just?
  • How do decisions made early in the data science life cycle affect the ultimate efficacy and responsiveness of systems?
  • How will acceptable accuracy rates be determined for different applications? 
  • Are there ways to turn the algorithms on and off as needed?
  • What accountability structures and auditing systems can be built to ensure the fairness of data science and AI algorithms across industries? (A minimal sketch of one such audit follows this list.)
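As one small illustration of what such an audit can look like in practice, the hedged sketch below compares a model’s false positive rate across demographic subgroups of a scored dataset. The file and column names are assumptions made for illustration, and a real audit involves far more than a single metric.

    # Illustrative sketch only: compare false positive rates across subgroups.
    import pandas as pd

    # Assumed columns: group (subgroup label), label (true outcome, 0 or 1),
    # prediction (model decision, 0 or 1).
    df = pd.read_csv("scored_decisions.csv")   # assumed file name

    def false_positive_rate(frame: pd.DataFrame) -> float:
        negatives = frame[frame["label"] == 0]        # cases whose true outcome is 0
        if len(negatives) == 0:
            return float("nan")
        return (negatives["prediction"] == 1).mean()  # share wrongly flagged as 1

    rates = df.groupby("group").apply(false_positive_rate)
    print(rates)
    print("largest gap between subgroups:", rates.max() - rates.min())

A large gap between subgroups does not settle the fairness question on its own, but it is the kind of measurable signal that accountability structures can require, publish, and review.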

Examples of Public Interest Data Science and AI

Over the past several years, an increasing number of academic institutions have recognized the importance of applying data science and AI in the public interest. They have created extracurricular clubs, classroom and experiential courses, and certificate and degree programs that train students to consider how data science and AI affect different communities and how these tools can be designed and deployed in new, beneficial ways.

A field scan by the ANB Advisory Group shows that students at PIT-UN institutions are learning vital historical context, working on interdisciplinary teams, and translating data insights into language that policymakers, community organizations, and businesses can understand.

For example, in Boston University’s BU Spark!, five program staff assign students to teams and manage semester-to-semester relationships with government agencies and nonprofit organizations. Students have used data science to conduct sentiment analysis of Twitter feeds for a national civil rights organization and regularly provide data analysis for the Boston City Council. Over 3,000 students have learned how to work with real-world, messy data, and how solving data problems can contribute to solving larger organizational or societal problems. In addition to technical courses, students learn critical sociological skills, such as how to understand race in the context of using census data. BU Spark! is one of many programs across PIT-UN member institutions demonstrating that labs (including summer programs and practical courses) are an effective way for students to learn public interest tech ideas in real-world contexts and to practice co-design and co-development with affected community partners.

Penn State’s “AI for Good, Experiential Learning & Innovation for PIT” program was one of a handful of PIT-UN grantees to train both college students and working professionals in the ethics and techniques of artificial intelligence. The program developed a new slate of experiential learning opportunities for college students, along with an online microcredential course for professionals in any sector. While it is important to train the next generation of technologists, we must also consider how to train today’s leaders and decision makers. 

Similarly, Carnegie Mellon University launched a Public Interest Technology Certificate program in 2022. Geared toward employees in all levels of government, the six-month program trained its first cohort in data management, digital innovation, and AI leadership “to create a more efficient, transparent, and inclusive government.” Training mid-career professionals while also building a PIT network that can inform and support their work can lead to real-world impact well beyond the walls of the university.

Key Lessons & Recommendations

These are just a few of the many projects across PIT-UN applying a public interest framework to data science and AI challenges. And universities can do even more. Although the development and use of data science and AI differ, some of the application settings and opportunities to effect change have similar underlying challenges. Therefore, the following three recommendations apply to both data science and AI programs.

1. Produce recommendations for policy work

As federal, state, and local policies and initiatives encourage greater use of data, government agencies will seek not just support in accessing data, but also access to advanced data science tools to make that data actionable. Miami Dade College, for example, worked with the nonprofit Code for South Florida, Microsoft, and the city of Miami to create a participatory web app that helps Miami residents become informed contributors to the city’s budget. In their 2019-2020 PIT-UN Challenge project, MDC created a GIS certificate course for underrepresented students to contribute to mapping the impacts of climate change.

Using data science to make clear policy recommendations or create policy products — especially in collaboration with other stakeholders — is a great way to provide students with experiential learning opportunities while also increasing the reach and impact of public interest tech’s core ideas. 

2. Define PIT competencies for data science and AI

As colleges and universities create and expand both data science and AI programs, students and professors alike seek courses grounded in strong research and clear outcomes. Projects such as Georgia State’s Public Interest Data Literacy Initiatives have created individual courses that offer PIT frameworks for data science and AI. We are at a point where PIT-UN schools could collaborate to create an inclusive set of standard competencies. Such standardization could lend more credence and visibility to PIT degrees and could be a prototype for standards required of all data science and AI practitioners regardless of sector.

3. Structure meaningful internships & experiential programs

Students — and even faculty — seek practical experience that they can put on their resumes, describe to potential employers, and use to forge cross-sector partnerships. PIT-UN has consistently funded experiential learning projects to strengthen the pipeline of technologists who understand how to apply data science and AI in the public interest. 

Columbia University and Lehman College’s Public Interest Technology Data Science Corps placed college students in teams to use data science to support New York City agency projects to improve the lives of local residents. Ohio State University placed PIT fellows in state government to encourage young technologists to consider public service, while fostering a culture of collaboration between the public sector and academia. These are just two examples of how meaningful internships and experiential learning speak to the interests of students and faculty while growing PIT’s public reputation. 

Our Task Going Forward

The sustained interest in and excitement about both data science and artificial intelligence bodes well for the future of academic programs dedicated to these fields. More significantly, the ways in which industries and community organizations operate will change, and be changed, because of advances in these technologies.

Making these changes more positive than negative, and actively reducing adverse disparities, will require sustained work, new ways of training practitioners, and usable recommendations and tools to shape a more just technology ecosystem. Public interest technology’s emphasis on equity and justice provides the necessary lens to guide the development and use of these technologies. As PIT-UN Program Manager Brenda Mora Perea reminds us, it is our job to keep these concepts at the center of all we do and to advocate for social responsibility at every stage of technology design, deployment, and governance. 

 


Navigating the Generative AI Education Policy Landscape

Data Science & AI

May 2023

Professor Wesley J. Wildman

Author: Wesley J. Wildman is Professor in the School of Theology and in the Faculty of Computing and Data Sciences at Boston University. His primary research and teaching interests are in the ethics of emerging technologies, philosophical ethics, philosophy of religion, the scientific study of religion, computational humanities, and computational social sciences.

Professor Mark Crovella

Author: Mark Crovella is a Professor and former Chair in the Department of Computer Science at Boston University, where he has been since 1994. His research interests center on improving the understanding, design, and performance of networks and networked computer systems, mainly through the application of data mining, statistics, and performance evaluation.

Like many institutions, universities are struggling to develop coherent policy responses to generative artificial intelligence amid the rapid influx of tools such as ChatGPT. Higher education is not known for its ability to respond nimbly to changes wrought by emerging technologies, but our experience thinking through and forming policy at Boston University — in dialogue not just with administrators and faculty colleagues, but also, crucially, with our students — points toward an opportunity to step back and reassess what our goals are as institutions of higher learning and how we can best achieve them. In this article, we describe the policymaking process at BU and the implications each of us is thinking through as instructors of writing and computer languages, two domains that generative AI is poised to disrupt in major ways. 

Generative AI has catalyzed a rare degree of intense discussion about pedagogy and policies.

A recent letter signed by over 27,000 leading academics and tech entrepreneurs calls for a pause on advanced AI development. (It’s important to note that plenty of their colleagues have opted not to sign, or have critiqued the letter.) There is indeed reason to worry about both widening economic disruption caused by generative AI and the arrogant or naive belief that the market will self-regulate and nothing too terrible can happen. And while the letter does raise awareness about the dangers of AI, the “pause” it calls for is highly unlikely; companies stand to lose too much market share, and countries too much competitive advantage in research and development, to step out of the AI race willingly. Furthermore, the letter offers little in the way of concrete steps to move responsible AI forward.

It is against this background that universities are struggling to develop coherent, effective policies for the use of generative AI for text, code, 2D and 3D images, virtual reality, sound, music, and video. As institutions, universities tend to be conservative, multilayered, and unwieldy, and well-suited to implementing strategic change over the long term. They are not so good at adapting to rapid technological change. But this area of policy is particularly urgent, because the assessment of learning has long depended on humans performing functions that generative AI can now accomplish — sometimes better than humans, sometimes worse, but often plausibly and almost always faster.

In other words, generative AI has catalyzed a rare degree of intense discussion about pedagogy and policies.

Co-Creating Policy with Students at BU

At Boston University, where we teach the ethics of technology and computer science, respectively, only a few individual university units have had enough time to devise unit-wide policies (most existing policies are for individual classes). Our unit — the Faculty of Computing and Data Sciences (CDS) — started with a student-generated policy from an ethics class (the Generative AI Assistance, or GAIA, policy), which the faculty then adapted and adopted as a unit-wide policy.

Screenshot of the Generative AI Assistance Policy, created by BU students and adopted by the Faculty of Computing and Data Sciences

The GAIA policy is based on several student concerns, expressed as demands to faculty.

  • Don’t pretend generative AI tools don’t exist! (We need to figure them out.)
  • Don’t let us damage our skill set! (We need strong skills to succeed in life.)
  • Don’t ignore cheating! (We are competing for jobs so fairness matters to us.)
  • Don’t be so attached to old ways of teaching! (We can learn to think without heavy reliance on centuries-old pedagogies.)

The GAIA policy also makes demands of students. Students should: 

  1. Give credit and say precisely how they used AI tools. 
  2. Not use AI tools unless explicitly permitted and instructed.
  3. Use AI detection tools to avoid getting false positive flags. 
  4. Focus on supporting their learning and developing skill set. 

Meanwhile, instructors should: 

  1. Understand AI tools. 
  2. Use AI detection tools. 
  3. Ensure fairness in grading. 
  4. Reward both students who don’t use generative AI and those who use it in creative ways.
  5. Penalize thoughtless or unreflective use of AI tools. 

The GAIA policy also explicitly states that we should be ready to update policies in response to new tech developments. For example, AI text detectors that are used to flag instances of possible cheating are already problematic, especially due to false positives, and probably won’t work for much longer.

The GAIA policy is similar to other policies that try to embrace generative AI while emphasizing transparency and fairness. It doesn’t ban generative AI, since a ban would run against the student demand that universities help students understand how to use such tools wisely. Nor does it allow unrestricted use of generative AI tools, since that would run afoul of the student demand for equal access and fair grading in a competitive job market. It is somewhere in between, which works for now. There are only so many ways of being in between.

The Role of Instructors in the Age of Generative AI

New technologies often create policy vacuums, provoking public interest ethical conundrums — just think of websites for sharing bootlegged music, or self-driving cars. What’s fascinating about the policy vacuum created by generative AI is how mercurial it is. You can throw a policy like GAIA at it and six months later the policy breaks because, say, AI text generation becomes so humanlike that AI text detectors no longer reliably work.

The big breakthrough in AI that led to the current situation was development of the transformer (the “T” in GPT). This approach to deep learning algorithms on neural nets was revolutionary and massively amped up the capabilities of AI text generation. There will be other, similar technological breakthroughs, and it is impossible to predict where they will come from and the effects they will have. Policy targets for generative AI are leaping all over the place like pingpong balls in a room full of mousetraps. Nailing down relevant policy won’t be easy, even for experts.

Educators face the prospect of generative AI short-circuiting the process of learning to think.

Professor Wesley Wildman teaches a Data and Ethics class at CDS on Tuesday, February 14, 2023. Photo by Jackie Ricciardi for Boston University

Consider writing. For centuries, we’ve been using writing to help young people learn how to think and to evaluate how well they grasp concepts. Writing is valuable as a pedagogical tool not just because of its outputs (essays), but also because of the processes it requires (articulating one’s ideas, drafting, revising). GPTs allow students to generate the product while bypassing much of the process. Accordingly, instructors need to be more creative about assignments, perhaps even weaving generative AI into essay prompts, to ensure that the value of the writing process is not lost. The GAIA policy is not merely prohibitive. It rewards people who choose not to use generative AI in ways that shortcut the learning process, while also rewarding people who use it in creative ways that demonstrate ambition and ingenuity.

Now consider coding. Surprisingly, the transformer mechanism that works so well to produce human-level language also works well to produce computer “language,” that is, code. Each programming language that students struggle to master — Python, Rust, Java, and all the rest — is just another system of tokens to GPTs. Generative AIs have shown stunning ability to synthesize working code from plain-language descriptions of desired behavior.
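To make that ability concrete, here is an invented illustration (not actual model output): the kind of plain-language request a student might type, followed by the sort of working Python a generative tool can plausibly return.

    # A plain-language request a student might type into a generative AI tool:
    #   "Write a function that takes a list of exam scores and returns the
    #    mean after dropping the single lowest score."
    #
    # The kind of working code such a tool can plausibly produce in response:
    def mean_without_lowest(scores: list[float]) -> float:
        """Return the mean of scores after removing one lowest value."""
        if len(scores) < 2:
            raise ValueError("need at least two scores")
        trimmed = sorted(scores)[1:]               # drop the single lowest score
        return sum(trimmed) / len(trimmed)

    print(mean_without_lowest([88, 72, 95, 60]))   # prints 85.0

Nothing in the request requires the student to decide how to decompose, implement, or test the solution, which is precisely the concern raised next.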
 
Just as with writing, so here: Educators face the prospect of generative AI short-circuiting the process of learning to think — in this case, computational thinking, which involves analyzing a problem and breaking down its solution into steps that can be implemented on a computer. This is a sophisticated skill that typically takes years to acquire, and no data scientist can be effective without it.
 
Generative AI will revolutionize the development of software. In fact, we believe most programming in the future will be done using generative AI, yet we’re also convinced that computational thinking will remain a vital skill. To begin with, we’ll need that skill to learn how to craft prompts that elicit the right kind of code from a generative AI. Engineering prompts for generative AI is an emerging domain of inquiry. We need to learn how to do it as practitioners and instructors, and we need to teach our students how to do it.
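As a small, hypothetical illustration of what prompt engineering for code can mean in practice, compare an underspecified request with one that does the computational thinking up front by decomposing the task, naming constraints, and stating the expected interface. The task and both prompts are invented for this example.

    # Two hypothetical prompts for the same task. The second encodes the
    # problem decomposition that the first leaves for the AI to guess.

    VAGUE_PROMPT = "Write code to clean up my survey data."

    ENGINEERED_PROMPT = """
    Write a Python function clean_survey(df) using pandas that:
    1. drops rows where the 'age' column is missing or outside 18-99,
    2. normalizes the 'zip' column to 5-character zero-padded strings,
    3. returns a new cleaned DataFrame without modifying the input.
    Include a short docstring and no output besides the function.
    """

Either prompt may yield runnable code; only the second gives the student, and anyone reviewing the result, a clear standard against which to check what comes back.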
 

Some Useful Historical Analogies

We believe this computer-language example can help universities grapple with the sharp challenge that generative AI poses to the traditional role of writing in education. Like the software stack, generative AI is enlarging the “writing stack,” promising to eliminate a tremendous amount of repetitive effort from the human production of writing, particularly in business settings. This new world demands writing skills at the level of prompt engineering and checking AI-generated text — unfamiliar skills, perhaps, but vital for the future and difficult to acquire.

In educational settings, instructors produce the friction needed to spur learning in more than one way. We once learned to program in machine language, then in assembly language, then in higher-level programming languages, and now in code-eliciting prompt engineering. Each stage had its own kind of challenges, and we nodded in respect to the coders who created the compilers to translate everything at higher levels back to executable machine language. Similarly, we learned to write through being challenged to express simple ideas, then to handle syntax and grammar, then to construct complex arguments, then to master one style, and then to move easily among multiple genres. Now, thanks to generative AI, there’s another level in the writing stack, and eliciting good writing through prompt engineering is the new skill. Friction sufficient to promote learning is present at each level.

Maybe, just maybe, generative AI is exactly the kind of disruption we need.

It’s not a perfect analogy. After all, high-level coders really don’t need to know machine language, whereas high-level writers do need to know spelling, syntax, and grammar. But the analogy is useful for understanding “prompt engineering” as a new kind of coding and writing skill.

In this bizarre policy landscape, how should universities chart a way forward? How should we handle generative AIs that produce high-quality text, computer code, music, audio, and video — and overcome existing quality problems in a matter of months? That have the potential to disrupt entire industries, end familiar jobs, and create entire new professions, and that are also vulnerable to replicating the bias of our cultures in uninterpretable algorithmic behavior that is more difficult to audit than it should be?

Given the massive questions now facing us, a “pause” on advanced generative AI would be nice. But we cannot pause the impacts of generative AI in the classroom, and we are not convinced that eliminating generative AI from the learning experience is the right path.

We recommend that university leaders and instructors step way back and ask what we are trying to achieve in educating our students. To the extent that universities are merely a cog in an economic machine, training students to compete against one another for lucrative employment opportunities and making them desperate to cut corners or even cheat to inflate their GPAs, generative AI threatens extant grading practices and undermines the trust that employers and parents vest in universities to deliver valid assessments of student performance.

But if universities are about building the capacity for adventurous creativity, cultivating virtues essential for participating in complex technological civilizations, and developing critical awareness sufficient to see through the haze of the socially constructed worlds we inhabit, then maybe we reach a different conclusion. Maybe, just maybe, generative AI is exactly the kind of disruption we need, prompting us to reclaim an ancient heritage of education that runs back to the inspirational visions of Plato and Confucius.

When students tell us they need our support to help them figure out AI, and warn us not to get stuck in our well-worn pedagogies, we think they’re doing us a great favor. We ourselves need to figure out generative AI and rethink what we’re trying to achieve as educators.

 


PIT & the Mission of Higher Ed in the Digital Age

Institutionalizing PIT

March 2023

Sylvester Johnson, Faculty Fellow, Public Interest Technology University Network

Author: Sylvester Johnson is PIT-UN’s Faculty Fellow, and Associate Vice Provost for Public Interest Technology at Virginia Tech, where he leads the Tech for Humanity initiative. 

Serving as faculty fellow for PIT-UN while also fulfilling roles as an associate vice provost and the director of the Tech for Humanity initiative at Virginia Tech, I wear many hats. It’s something that comes naturally to me as a transdisciplinary scholar. 

I discovered a deep interest in technology while leading a research team at Northwestern University that built AI capable of scanning and analyzing a centuries-old humanities text. I chose to join Virginia Tech a few years later because of its role as a technology leader that was also invested in being a comprehensive university – this at a time when most American universities have tended to reduce their investment in and support for disciplines and programs in the humanities, social sciences, and creativity.

Torgerson Bridge / Courtesy of Virginia Tech.

I was hired to establish and direct a center for humanities and to interpret the mission of humanities in a manner that might enrich the research, teaching, and engagement operating across the entire university. The university’s provost at that time, Thanassis Rikakis, underscored the urgency for humanistic, human-centered scholarship to play a central role at Virginia Tech at the very moment technology innovation was driving unprecedented growth, transformation, and uncertainty across the globe. 

As a scholar and administrator whose background is more in the humanities than the sciences (though I did earn my B.S. in Chemistry!), one of my most important words of advice to anyone looking for ways to institutionalize public interest technology is to seek out partnerships across the university that transcend our traditional academic divides. 

It Can Be as Simple as a Cup of Coffee

A handful of relatively feasible strategies can add tremendous value to efforts that advance PIT on any number of campuses. Of special importance is elevating awareness and building intellectual community among people who work across different areas of the university. How might this happen? Simply meeting with campus stakeholders within and beyond one’s own unit can happen for the cost of a cup of coffee. One or two conversations per month can help leverage the interests and concerns of others on one’s campus in ways that intersect with a larger PIT strategy. 

Another low-cost strategy: creating a “PIT” listserv that shares information about PIT-related issues and events (such as special webinars hosted by New America and other institutions), and inviting potential stakeholders and collaborators to join and post to the listserv. 

Hosting public conversations, research talks, or small workshops with on-campus researchers is a third way to elevate public interest technology and build intellectual community in a transdisciplinary fashion. All of these methods allow PIT liaisons to demonstrate interest in their colleagues, inform curious potential collaborators, and build a network of stakeholders who can become allies and partners in later stages of shared work that transcends disciplinary divides across a college or university.

Leverage the Potential of Networks

Our institution’s exploration of intra-disciplinary and transdisciplinary research provided a fortuitous runway for connecting to the PIT-UN. In 2018, we launched “Tech for Humanity” as a university-wide initiative to elevate existing work at Virginia Tech that embodied human-centered approaches to technology and to inspire and advance new efforts toward humanistic governance of technology. In 2019, we learned of the Public Interest Technology University Network that New America was administering. It was immediately evident that the PIT-UN was timely and extraordinarily resonant with the aims of Tech for Humanity. We applied to join the consortium and became members in 2020.

Technology is a comprehensive issue – not merely a technical one.

Since that time, the PIT-UN has created tremendous value and has amplified the possibilities emerging from Virginia Tech’s strategic vision and planning. The network has enabled VT to build relationships with other universities, to collaborate in advancing the emerging field of public interest tech, to advance thought leadership on technology issues, and to raise our university’s profile as a leader in this area. This has paid dividends in structural and programmatic ways – e.g., through our ability to attract talent for research and teaching. Beyond this, the PIT-UN has sharpened our external legibility as a comprehensive university, and it has enriched and deepened our faculty’s culture of collaboration across disciplines.

For instance, the lens of public interest technology has created a new means to connect our librarians who determine data privacy policy within the university and scholars in humanities and human sciences who study policy, data ethics, and public affairs. As a further example, PIT has also connected specific, project-based teaching and learning in technical areas to curricular work in humanities centering on social disparities and equitable outcomes. As a result of this integration, one of our student teams recently collaborated to offer college-level, technology-enabled instruction to incarcerated students who are eager to advance their education. All of this has facilitated our efforts to operationalize a commitment to greater inclusivity, social justice, and public good. 

PIT & Humanities: Stronger Together

I would be remiss not to emphasize another vastly important way in which the network has benefited Virginia Tech. The mission of PIT-UN, which advances public interest and civic benefit in a technological society, has provided external validation for the internal efforts at Virginia Tech to elevate the role of arts, humanities, and social sciences in research, teaching, career paths, and societal impact.

This has happened within a larger environment that is often harshly negative toward humanistic and artistic disciplines. Barely a month or two passes without a popular article lamenting the decline of humanities or questioning the relevance of humanistic studies in the United States.

The most grave technological threats lie at the human frontier of technology.

Faculty and administrators alike are accustomed to thinking about humanities through the mode of crisis. Legislative assemblies have spent decades defunding comprehensive education through a narrow focus on STEM skills. Parents frequently warn their kids away from majoring in humanities. As a result, college students increasingly arrive on campus with the view that majoring in humanities or pursuing studies in creativity is a dead end for career success.

In this context, it is important to heed the message of technology leaders who have repeatedly warned against pitting specialized technical fields against generalist approaches to knowledge and education in non-STEM areas such as humanities, human sciences, and creativity. 

Among these is Scott Hartley, a successful innovation and technology entrepreneur and the author of The Fuzzy and the Techie: Why the Liberal Arts Will Rule the Digital World. Hartley studied political science in college before pursuing graduate studies in international affairs and in business. Throughout his career as a leader building businesses, technology, and civic infrastructure, Hartley has championed the role of liberal arts education in shaping technology leaders who are curious about high-level questions, broadly empathetic, and skillful in perceiving the larger context of the problems they seek to solve. These qualities are essential for addressing the most difficult challenges our technological society faces; they are also the very skills and sensibilities the liberal arts excel in cultivating.

PIT and the Purpose of Higher Ed

Public interest technology is emerging as a field that is enabling a broader range of stakeholders to understand that technology is a comprehensive issue – not merely a technical one. This is why it should come as no surprise that the most difficult technological problems to solve are in ethical, political, legal, and social domains. The most grave technological threats and harms, in other words, lie at the human frontier of technology.

The American sociobiologist Edward O. Wilson was especially perceptive when he observed, almost 15 years ago, that the fundamental problem with humanity is that we have “Paleolithic emotions, medieval institutions, and god-like technology.” Resolving these tensions will require future talent to draw on a vast array of knowledge and expertise, traversing technical, scientific, humanistic, and artistic domains. This is one of the most important messages that public interest technology is amplifying, and it is one that must be embraced by our current and future students and faculty, and by the larger society, for the sake of our human future.

As our academic institutions increasingly engage with transdisciplinarity and problem-based learning, it seems clear enough that the field of public interest technology has become an especially potent and urgent means to enable our colleges and universities to fulfill their mission in service to all members of society – particularly those who are at greatest risk when things go wrong. 

As we continue to elevate the public interest as our north star for the governance of technology, let us work to ensure that the future of innovation can be one that ultimately serves the public interest, sustainability, and human flourishing.

Best Practices: Community Partnerships in PIT Work


Theme: Public and Critical Infrastructure

Author: Esther Han Beol Jang, PhD student in Information and Communication Technologies for Development (ICTD), Allen School of Computer Science and Engineering, University of Washington

Editor: Kurtis Heimerl, associate professor, Paul G. Allen School of Computer Science and Engineering, University of Washington

The Seattle Community Network, which is funded by PIT-UN as well as the city of Seattle, National Science Foundation, and others, explores the technological and social structures needed to develop community-held cellular infrastructure and runs seven network sites providing low-cost or free Internet access to community members. To do this, we work with local partners including government institutions such as the Seattle Public Schools and Tacoma Public Library, connectivity and digital equity initiatives such as the Tacoma Cooperative Network and Black Brilliance Research Project, and nonprofits like the Filipino Community Center. 

Working with community partners to make a positive impact on people’s lives is fundamental to centering research and technology in the public interest, and it may be the part of the work you find most challenging — and most rewarding.

Setting Expectations

Working with community partners can be the most challenging aspect of a PIT project. Like any other relationship-building activity, it can involve an intense amount of emotional labor, requiring empathy, patience, and strong communication skills. Sometimes partnerships will fall apart due to interpersonal conflicts, or because one or both partners lack the emotional or operational bandwidth to support the relationship at a given time. These challenges are to be expected, because they have to do with people being human — anything from clashing communication styles, to being busy or overcommitted, to being uncomfortable and not knowing how to express it. Being resilient to these kinds of interpersonal challenges is crucial for doing PIT research. Key moments for community-building can occur during both setup and crisis.

Establishing a Team

Foundationally, if you’re doing community work, your team needs a community-focused researcher. 

  • Have one or more researchers or project staff on the team who genuinely enjoy and find meaning in community- and relationship-building (not view it as a chore, obligation, or box to be checked) and who have the motivation, time, emotional energy, and interpersonal skills to undertake it. Ideally, this person or group should provide a long-term, consistent point of contact for community partners, although personnel transitions do happen and can be navigated with good communication ahead of time. It can be especially beneficial when individuals choose this role based on some aspect of their own identity that makes relationship-building particularly meaningful or rewarding for them. In-group identity can matter a lot in terms of initial trust, interpersonal response, or community access. We cannot emphasize enough how diversity in the research team can be a strength and provide unexpected opportunities.
  • Sometimes it can take a while (even years) for research team members and community partners to become aware of each other’s multidimensional strengths (and weaknesses). Community partners may be shy or intimidated by the researchers’ credentials and regimented methodologies and as a result not immediately come forward with their own experience, knowledge, connections, and other strengths. Early tensions around communication styles or other sources of conflict may also cloud people’s initial impressions of each other and block relationship-building. Creating space for more relaxed social and unstructured time together can help; over the years our collaboration has been strengthened by informal monthly happy hours.
  • When communicating digitally with partners, researchers should be persistent and patient, adaptable to a slower response cadence than in academia. They should also be open to different communications platforms (such as Discord or Messenger) if emails or online messages are not working for others. Trying phone calls, text messages, or even in-person visits will often be necessary. Importantly, these should be conducted without undue anxiety or urgency.

Challenging Communication Scenarios

Researchers communicating regularly with partners should always be ready to listen, affirm, and respond to expressed needs, concerns, or confusion without denying fault or being defensive. You can think of this as a “Yes, and” mentality. We used to keep a Post-it note on the desk with the phrase “They want to feel heard, respected, and loved” to make sure we always responded with the right attitude to the person on the other side of any written communication.

In any conflict, however minor, it is important to hear and acknowledge feelings first, understand where unmet needs or negative emotions are coming from, and resolve any misunderstandings to reestablish good rapport before being able to move forward with joint problem-solving.

Potential for partner anxiety and conflict is heightened by power imbalances, which pose an inherent challenge to trust. Power imbalances are usually present when academics from powerful universities (especially those holding project resources and funding) interact with representatives from vulnerable or marginalized groups who have historically struggled to be heard or acknowledged.

The easiest and least conflict-ridden community partner relationships tend to be between institutions that feel on equal footing in terms of power and funding (for example, a university and a library or school system), often with the resources and experience to establish formalities such as a memorandum of agreement (MOA). 

Despite initial overhead, having a template MOA and being ready to establish one can be a good idea. However, getting to the level of mutual trust to write and sign such a document together can be daunting without having first built a comfortable working relationship, especially in the presence of power and capacity imbalances or personnel turnover (such as losing the trusted main contact on either side of a partnership). 

Be flexible and forgiving, and know that you may need to have a backup plan or pivot a project if one set of partners falls through.
