An Interdisciplinary Approach to AI Ethics Training

Data Science & AI

May 2023

Sina Fazelpour

Sina Fazelpour is an assistant professor of philosophy and computer science at Northeastern University. His research centers on questions concerning values in complex sociotechnical systems that underpin our institutional decision making. He is a core member of the Institute for Experiential AI and co-founded the Intelligence, Data, Ethics and Society (IDEAS) summer institute for undergraduate students.

Sina recently sat down with PITUNiverse Editor Kip Dooley to share progress on the IDEAS summer institute, where undergraduate students learn from world experts in data science, ethics, computer science, philosophy, and law about the responsible development of data science and AI. The IDEAS institute is supported in its second year in part by a PIT-UN Challenge grant.

Kip Dooley: Sina, you’re about to run the second cohort of an interdisciplinary summer institute on AI. How did the IDEAS institute come about? 

Sina Fazelpour: The motivations were twofold. First, I have both a technical background in engineering and a philosophical background in the values of technology. AI is a sweet spot for me as a practitioner and educator because AI systems very clearly create both benefits and burdens, whether in the context of allocating medical resources or hiring or some other domain. It is always going to be a complicated issue. Technologists working on AI need to be able to ensure that these systems simultaneously work in ways that respect privacy, lead to just and fair outcomes, and are robust in their performance. This is a very complex task, and we really don’t yet have good models for how to do it well. 

One of the key things missing from the puzzle is an interdisciplinary perspective. We cannot approach these problems from solely a technical perspective, nor solely a humanistic or philosophical perspective. A technical assessment without ethical considerations is insufficient, and you really can’t assess these systems well ethically without knowing at least some of the technical details. Interdisciplinarity is a key skill we need to cultivate for public interest technologists, but our institutions, generally speaking, are behind on this. 

When engineering students take ethics, it’s usually focused on what not to do.

Most undergraduates interested in technology don’t receive the type of instruction that will prepare them to approach issues from an interdisciplinary perspective. Engineering students have to take an ethics course, but it’s usually focused on how you, as a professional engineer, can avoid breaking rules. They focus on what not to do. They don’t teach you what you ought to do in your practice as an engineer. What values should you consider when designing a product? What ethical considerations should you embed throughout design and development? We don’t train people how to do this, and that’s extremely problematic.

As a result, when we try to convene interdisciplinary teams (in academia or in industry), people often lack a shared language to even talk to each other. And perhaps even more fundamentally, they don’t know when they have to talk to each other. Engineers might come to a product launch thinking they are all done, only to find that some kind of ethicist or regulator is telling them how the product can or cannot be used. The engineers haven’t considered that throughout the design and development, they have made choices — their own choices! — that are permeated with certain values and ethical assumptions.

So the first motivation for the IDEAS institute was to make sure we introduce this type of interdisciplinary thinking about values and technology at an earlier stage of our students’ development, so that interdisciplinary thinking and dialogue are second nature to them by the time they graduate.

The second motivation was about broadening participation in the field of AI and technology development more generally. We know there are significant issues of underrepresentation of different groups, both in scientific disciplines and in the humanities. Both fields need to become more inclusive, and the environments more welcoming to different identities, value sets, and experiences. 

Why? Well, if you pay attention to the headlines, you’ll know that the harms of technology are not equally distributed. They disproportionately fall on members of historically disadvantaged groups. We want to make sure that people who are particularly affected by emerging technologies are among those making the decisions about how they are developed, deployed, and governed. This could mean making technical decisions, making philosophical decisions, legal decisions, regulatory decisions — technology touches every aspect of society, which is what public interest technology is trying to grapple with. We want to enrich the decision-making pipeline.

The IDEAS Summer Institute will take place in two locations this summer: Northeastern and UC San Diego.

In sourcing guest speakers and creating this interdisciplinary program, did you already have connections with people in different disciplines? How did you bring together people from such a range of fields?

Coming from a very interdisciplinary background really helps. In my Ph.D. program at the University of British Columbia, I was in the Philosophy Department, but I was working with neuroscientists and computer scientists. My postdoc at Carnegie Mellon was in philosophy, but I had a secondary appointment in machine learning. So those relationships proved very helpful both in terms of guest speakers and in shaping the program. 

But to be honest, in the first year when funding was scarce, I just invited a bunch of my computer science and philosophy friends to come stay at my place for the week. It was really thanks to the generosity of my friends, who were willing to spend their own money to travel here and stay with me. 


We all need a little help from our friends. … How will the program be different this year? What do you hope to build on from the pilot?

On the final day last year, the students were so excited to take what they’d learned and write a paper, make a video for social media, or design a product. I thought, “OK, the program needs to be two weeks.” The first week will provide the necessary technical background, along with the philosophical background on fairness, justice, and privacy; in the second week, students can work on group projects and presentations.

The Network Challenge funding will allow us to do two full weeks. It will be more impactful in terms of training, because the students will actually get to do something with the theoretical background.

We’ll also look to enrich the mentorship piece this year. Last year, we just had guest faculty; this year we’ll also have graduate students who will serve as mentors. Throughout the two weeks, the students will have time to talk to their mentors about their projects and also ask questions about what life looks like in academia or industry. They’ll have the opportunity to build networks. 

We’ll also be inviting faculty from other PIT-UN schools, particularly ones that don’t have programs like this. Here at Northeastern, we have one of the highest densities of people working on the ethics of artificial intelligence. We want to share with others how to run these kinds of sessions, so they can create their own courses and programs and distribute this multidisciplinary ethics training across different types of institutions, not just the ones with a specialty like ours. 


To learn more about the IDEAS Institute, visit its website or Sina Fazelpour’s website.

The Legacies We Create with Data and AI

Data Science & AI

May 2023

Professor Renée Cummings

Renée Cummings is a Professor of Practice at the University of Virginia focused on the responsible, ethical, and equitable use of data and artificial intelligence (AI) technology. She has spoken to groups around the country and the world, from local school districts to the EU Parliament, about the risks and rewards of AI.

She recently sat down with PITUNiverse Editor Kip Dooley to reflect on a big year, and share the frameworks that her audiences have found most helpful for understanding data science and AI.

Kip Dooley: Renée, you’ve had a busy year of speaking engagements with everyone from local school districts to the World Economic Forum. What topics and areas of expertise have you been counseling people about?

Renée Cummings: So much is happening around AI and data science right now, and so quickly. The tools are rolling out with such frequency and ferocity that I have been called upon to discuss everything from generative AI to oversight, compliance, regulation, governance, and enforcement.

I always emphasize thinking about the “three R’s”: the risks, the rewards, and of course, the rights, whether talking about how we integrate AI into education systems, social systems, business — you name it. 

Data is like DNA. How it’s collected and used determines the kinds of algorithms we can design, which are more and more determining access to resources and opportunities.

In terms of your approach to data science and your professional journey, what do you think has prepared you to offer helpful advice at this moment?

I bring what I call an interdisciplinary imagination to data science. My work is not only in criminal justice, but also in psychology, trauma studies, disability rights, therapeutic jurisprudence, risk management, crisis communication, media, and more. My work is about future-proofing justice, fairness, and equity as we reimagine society, systems, and social relationships with AI. It is also about public education: building awareness of data’s impact on society and democratizing data, so that we understand the power of data and how to use that power responsibly, wisely, and in the interest of the public good as we design responsible AI.

I focus not only on doing good data science, but also on leveraging data science in ways that are equitable and ethical, using it to build equitable and enduring legacies of success. How can we use this technology to ensure that groups and communities thrive? The goal is to build more equity into our systems and into the ways we design data-driven solutions. It’s really about using data science to build more sustainable and resilient legacies.


“Legacy” is an interesting choice of word when talking about AI and data science. Tell me more about why you use that word in particular.

When we design an algorithm, we have the opportunity to use these tools of measurement as a means to enhance access, opportunity, and resources for communities — now, and for generations to come. Data is like DNA. How it’s collected and used determines the kinds of algorithms we can design, which are more and more determining access to resources and opportunities. 

Unfortunately, what we’ve seen for the most part are algorithms that deny legacies, that deny access to resources and opportunities for particular communities, because of issues like the lack of diversity in technology. What we find is that bias, discrimination, and systemic racism are amplified, and certain communities don’t get equal access to resources. What data does is shift power, particularly at the level of systems. Data is about power. Data doesn’t have the luxury of historical amnesia.

What are some examples that illustrate this idea, that data is about power?

We can start with the mere fact that most of the world’s data is owned by five companies. Those companies have created thriving legacies, billion-dollar legacies. They are the ones that governments need to negotiate with over tech regulation, compliance, and enforcement. Furthermore, they set the agenda for what we talk about. We’re all talking about generative AI now, and how it could change — or already is changing — the game, from industry to education. It’s all about power.

Looking at the mad rush to acquire data, from a criminal justice perspective, we’re starting to consider data brokers as cartels, traffickers, smugglers. Think of how companies scrape all kinds of data from the internet to feed large language models. This is creating new systems of power, placing a small group of individuals and companies at the helm of decision making around who has access to resources.


You’ve been studying these systems for a long time. How have the questions shifted with the sudden onset of generative AI and the explosion of generative AI applications? 

Generative AI is just a tool, and it can do us some good. But who has access to it? I just had a speaking engagement at a university in Kentucky where many students do not have internet access at home. So when we’re deploying technologies like generative AI, or we’re building smart cities, we’re only focusing on certain geographical spaces that have access to them. Is it going to widen the digital divide? The conversation happening in the U.S. and Europe about how to legislate generative AI is not engaging the Global South. 

We also have to ask whether it’s just a lot of hype, because we see the many contractual issues, around intellectual property rights, that come with adopting generative technologies in corporations or the federal government. The [Federal Trade Commission] recently was very direct and instructive in talking about generative AI, deception, and disinformation.

I think that primarily it has amplified the questions we have been asking for a very long time, not created new questions, although it does pose a new threat. And there’s just so much conversation, so much to keep track of, that a lot of people are overwhelmed at the moment.

Professor Cummings at the World Economic Forum Global Technology Governance Retreat 2022 in San Francisco, June 20–23. © 2022 World Economic Forum

For the people who are overwhelmed, what are some things you try to help them reorient toward, to make the problems and questions feel a little more manageable, or at least digestible?

I often say that AI is a new language, and it’s important to become literate in that language. There’s a certain level of digital proficiency we’ll need to be able to function in society as these technologies continue to spread.

It’s also important to understand that this is not a new technology; it’s nearly 67 years old. The improvements in machine learning, deep learning, and neural networks have come within the past 10 years and have brought forth very successful iterations, but it’s not a new concept.

These tools can assist you and bring an extraordinary amount of effectiveness or even excellence in the way you do your work. But there are challenges: accountability, transparency, explainability. We’re not able to truly audit these technologies. We’ve got to enter into this space with a certain amount of sobriety instead of being totally overwhelmed.

I often tell people to just breathe and to play with the tools. Use curiosity. Be curious enough to know about the technology and how to use it, with the knowledge that it’s changing the world around you. How is technology changing your world? This is the backdrop we can use to discuss the need for more regulation and governance in the tech space more broadly. 


In this environment, where it feels like the public has little or no say in how these technologies are designed and governed, what levers do you see as promising areas of intervention?

It’s important to remember that we have a very solid community of AI ethicists and activists working in this space who have the capability and competency to design rigorous and robust guardrails. But a lot of people, the public, don’t understand that AI ethics, tech ethics, and data ethics even exist, or that we all have rights in this digital space. Many of the technologies being developed and deployed impact our civil rights, our civil liberties, our human rights.

When we bring rights to the fore, people wake up. When people understand there’s a technology making decisions about them, and they don’t have an opportunity to participate in those decisions, they start to think about what they can do and what they need to do. They start to think about the lack of agency and autonomy. We all have a right to privacy, to self-actualization, self-expression, self-determination. We also have the right to equal opportunities. These are hard-won rights that people usually are not so willing to give up. 

Again, that concept of the “three R’s” — risks, rewards, and rights — can bring us back to these key questions. 

What always wakes my students up is that concept of legacy: What is the legacy you are designing and deploying?

To what extent can algorithms create equity? Where do you see positive gains that have been made, or possibilities, for algorithmic systems to protect rights and create equity?

One area is government corruption and procurement. Through algorithms, you can account for every dollar and track fraud and corruption through government systems. Every dollar that is stolen through corruption is a dollar that taxpayers don’t have access to, that children don’t have access to. 

Algorithms can help us visualize data around human trafficking, migration, and crisis intervention in times of war and national emergencies. We’ve seen really solid work being done around natural disasters like hurricanes and volcanic eruptions. There’s been some research looking at the effects of earthquake aftershocks in Haiti — encoding buildings and visualizing where and how destruction could take place.

One other area I can point to, given my background in criminology, is organizations like the Innocence Project looking at ways to deploy algorithms to find cases where there could be wrongful convictions, or records that should be expunged. At UVA, through my data activism residency, we’re developing a tool called the Digital Force Index, which will help people see how much surveillance technology is being deployed in their communities. 

Unfortunately, in policing and the criminal legal system, tools like predictive policing have really not delivered on their promises. We hope that tools like the Digital Force Index will spur a more informed, community-led conversation around police budgets, the right to know how much is spent on surveillance tools and where in communities they are deployed, and whether these tools truly enhance public safety or are simply vanity projects. The Digital Force Index is the heart of public interest technology.


What questions or best practices would you like to see technology designers take on as part of their responsibilities?

What always wakes my students up is that concept of legacy: What is the legacy you are designing and deploying? That brings them back to their social responsibility. As data scientists, whether we’re working on services, systems, or products on behalf of the collective, we are designing futures. What is your legacy? What is the legacy of your family, your community, your generation?

They start to think about questions around diversity and inclusion, equity, trauma, and justice. How are we traumatizing certain groups with technology? How can we bring a trauma-informed and justice-oriented approach to the ways we’re using data? We have to understand that different communities experience data differently. We don’t want to do data science in ways that will replicate past pain and trauma.

Data carries memory, a troublesome inheritance for particular communities. Those painful past decisions are trapped in the memory of the data, opening some deep social wounds as we attempt to use data to resolve very pressing social challenges and social questions. If we use historical data sets to build tools like large language models, which have been developed with toxic data scraped off the internet, what we risk doing is retraumatizing, revictimizing, groups that have tried so hard to find ways to heal. I’m always trying to get students to ask how we can use data to help communities heal, thrive, and build resilient and sustainable legacies. 


Meet MSI/Equity Fellow Sheetal Dhir

Institutionalizing PIT

March 2023

Equity Fellow Sheetal Dhir

Sheetal Dhir is a senior strategist with over ten years’ experience in media, politics, and advocacy, and PIT-UN’s newest team member.

She sat down with our second-newest team member, Communications & Events Manager Kip Dooley, to discuss why she joined PIT-UN, what she’s learning from our members, and her professional superpowers.

Kip Dooley: Sheetal, my first question is one I always ask our members and partners: what’s your connection to PIT? How did you get into this space, and why do you care about these issues?

Sheetal Dhir: I think it’s three-fold. First and foremost, I was privileged enough to be on the launch team for PIT at New America as a consultant working closely with [now-Director] Andreen Soley. I got to read and write some of PIT-UN’s founding documents during the nascent stages, so the notion of public interest tech has been swirling around in my mind and marinating for years.

Secondly, in my last job at Color Of Change, we did very intensive work around how technology impacts communities of color. Part of how we tried to lead and push for change was by chewing on some of the big questions that public interest technology poses, like “who makes the decisions about how technology is designed and deployed? Who is not in those rooms, and what’s the downstream impact of that reality?” One of COC’s pillars is tech accountability, so I spent a lot of time thinking about bias within tech as well as policies like Section 230. 

My third connection is really about my personal relationship to technology. Once I joined the working world, my career completely changed twice because of the internet: first, the news cycle became 24-hour and dictated by ad revenue, and later, advocacy work became much more donor-driven. 

Since I left news media, nearly every project I’ve worked on has involved tech and society in some way.

I started off as a news producer in the days of the BlackBerry. Anytime my phone rang, I would flinch, because it meant I could be stuck in an editing bay for the next three days. The fast transfer of information and the need to respond to it was astounding and transformational in terms of the balance between editorial and business. To me, it became less about context and more about getting the information out quickly. I don’t know if that’s been to our benefit or detriment; I guess it depends on the story or the thing you’re advocating for. Watching how the news cycle, and later the world of advocacy, were completely transformed by technology has left a lasting impression.

On a basic level, a big thing that drives this work for me is just trying to figure out how I’m going to engage with this thing [holds up smartphone] for the rest of my life.


Kip: You’ve worked in broadcast media, strategic communications and advocacy. Which professional experiences prepared you best for this current role as Equity/MSI Fellow with PIT-UN?

Sheetal: Since I left news media, nearly every project I’ve worked on has involved tech and society in some way. When I worked with the ACLU’s David Trone Center for Criminal Justice Reform, camera footage of police killings was starting to become easily shareable. We often got that footage early from our affiliates on the ground, and that was my introduction to the power of surveillance technology: either in service of the public good and social change, or in service of policing communities in a very militarized way.

Similarly, when I was at Amnesty International doing crisis work, I saw how the Department of Homeland Security started using cell phone technology and public utilities to track migrants and process asylum applications. We were stuck with a really difficult question: is this technology actually empowering people? Are they actually getting on a path to citizenship because of this technology, or are they just becoming numbers on an app? Before this technology, there were citizenship officers talking to migrants and hearing their stories. There was certainly bias in many of those interactions, but now so many migrants just get sorted by an algorithm that no one outside the federal government can see or understand.

At the ACLU, Amnesty, and Color Of Change, we had to grapple with questions about how the government was deploying technology. That was a big shifting of the lens for me: technology is everywhere, and is built into the fabric of how we administer the state.

Kip: These are massive questions.

Sheetal: Yes, and I have to be honest: I don’t think anyone has figured it out yet. When I was researching disinformation for the president of Color Of Change, it was clear that there are a ton of big questions and no silver bullets or straightforward answers. That’s another thing that inspired me to take on this role with PIT-UN. We’re housed in a think tank, so our job is to think through issues and communicate what we find; we’re also supporting a network of universities creating curriculum, pedagogy, research, and, importantly, a supply of skilled labor, so that these issues don’t stay siloed in academia but filter into all sectors, both private and public.


Kip: Speaking of our network of universities, you’ve been meeting with our minority-serving institutions (MSIs) to get a sense of their needs and interests. What are you learning?

Sheetal: What I’m learning is that the boots-on-the-ground academics who are doing the work of PIT and pushing the boundaries of what’s possible are both incredibly passionate and wildly under-resourced. There is just too much to do and too little time. We’re still in a pandemic. Their ideas are amazing; it’s just that we need, like, six of each of them to get all the work done on each campus. I’m really interested in the possibility of creating a train-the-trainer model so that we don’t have to rely so heavily on the expertise and work of just a few people. We need to spread the wealth.

What’s more, I think there are so many people doing PIT work, but they just don’t call it PIT yet. The more we can get our members to become ambassadors who engage in thought leadership and inspire their colleagues to join us, the better.


Kip: Ok, let’s end with a fun one: what are your professional superpowers?

Sheetal: You know what – I actually took an online quiz the other day that was spot-on. My first superpower is complexity-busting: cutting through layers of information to find the most important ideas. It really comes in handy when a team has a ton of research or ideas, but isn’t sure what it all means.

Kip: I’m even more excited to have you on our team now. 

Sheetal: The shadow side is that I can paint in really broad strokes, so I have to make sure I don’t leave out important details. Keep me honest!

Kip: I will!

Sheetal: The second superpower is I’m an empathizer. I pick up on the needs and emotions of people around me, and learn a person’s quirks even without them telling me. The quiz did say that empathizers sometimes channel others’ perspectives so easily that it can be difficult to develop their own point of view or opinions. I definitely do not have that problem.

Kip: Are there any topics folks should ask for your opinion about? Anything you love to discuss?

Sheetal: I’ve learned a lot about Ayurvedic cooking through my own nutrition journey. I’m by no means an expert, but I’ve learned a good deal and enjoy talking about it. I’m also very strong at early-’90s trivia, so go ahead and quiz me. And I love talking about my experience in community organizing. I’m pretty good at getting large groups of people to do complicated things in a short amount of time.

Kip: Wow. Working with PIT-UN seems like a great fit for you. 

Sheetal: Let’s just say that I’m excited to be here. 

Sheetal Dhir manages PIT-UN’s equity, inclusion and justice strategy while working with MSI members to develop and sustain their PIT programs. Her PIT interests include working with MSIs to ensure they have the resources they need to do the work they are excited about. Reach her if you’re looking for a thought partner to discuss how to best capitalize on your current research docket: dhir (at) newamerica.org.

Q&A with Dr. Cynthia Warrick, President of Stillman College

Dr. Cynthia Warrick, president of Stillman College
Courtesy of Stillman College

Five years ago, Dr. Cynthia Warrick answered the call to lead Stillman College, a Historically Black College in Tuscaloosa, Ala., and a member of PIT-UN. As its president, Warrick has navigated the college through rocky times, financially and academically. Today, Stillman is thriving, and its public interest technology programs continue to grow. Here, she reflects on her journey and her hopes for Stillman’s future as she prepares to retire in June 2023.

Q. Stillman College is one of the newest members of Public Interest Technology University Network. What are the values that drew Stillman to PIT-UN?

Stillman is very proud of our involvement in this network of more than 50 academic institutions working to ensure that all community members have the knowledge and skills to understand and access technology, which is playing a greater role in our daily lives. As tools like AI-powered search engines come to market, PIT-UN helps ensure that less-educated people, underserved communities, and communities of color will have a voice in the technology advances of today and tomorrow.

Q. As a liberal arts college, why is public interest technology important to the students and college?

When the public thinks about technology, the engineering field comes to mind. But not everyone is going to be an engineer, or even think like an engineer. A liberal arts education prepares students to examine ideas from multiple vantage points, and to critically integrate problem-solving through multidisciplinary collaboration across historical, social, and cultural norms. Public interest technology helps students view technology’s impacts through a liberal arts lens.

Q. Stillman recently received a $2.7 million federal grant to build a cybersecurity IT training center. What does this mean for the college, students and the community?

Stillman College is located in the three poorest census tracts in Tuscaloosa and is the largest employer and contiguous landowner in West Tuscaloosa, which is 94 percent African American. Under the Biden administration’s Climate and Economic Justice Screening Tool, West Tuscaloosa is considered a Justice40 community in several categories: low income, higher-education non-enrollment, energy burden, housing cost burden, asthma, diabetes, heart disease, low life expectancy, low median income, unemployment, and poverty. Converting Geneva Hall (a former dormitory built in 1954) into a Cybersecurity and Information Technology Training Center will address some of these socioeconomic issues among the residents of West Tuscaloosa. They will have access to state-of-the-art technology, education, and training, enabling them to earn certificates, badges, and credentials that will improve their quality of life.

Q. As the leader of an HBCU, what is your vision for the future of the college in advancing equity and racial and social justice?

HBCUs are institutions where students, faculty, and staff can grow and develop without the burden of racism. Stillman, like most HBCUs, has been a leader in the fight for equal rights and social justice, producing leaders yesterday and today. I pray that our commitment to equality and justice will continue as long as those ideals are needed in the nation and the world.

Q. As the first female president of the college, what do you think was the biggest challenge and opportunity, looking back at the past five years?

The biggest challenge facing Stillman’s future was the $40 million US Department of Education HBCU Capital Finance Loan that was taken out in 2012 and that, because of the COVID-19 pandemic, would probably never be paid back. I led a contingent of other HBCU presidents with these loans to petition legislators to forgive them; the loans totaled $1.6 billion, with $400 million of debt forgiveness for HBCUs in Alabama.

Q. Stillman College’s Cybersecurity DEI Clinics project is one of the 18 Network Challenge grantees this year. Can you describe the impetus behind the project and the impact it aims to achieve?

Stillman’s cybersecurity diversity, equity, and inclusion (DEI) clinics are based on the Citizen Clinic model developed at UC Berkeley’s Center for Long-Term Cybersecurity. Clinics will be scheduled with our HBCU Consortium members in major urban centers: Birmingham, Houston, Nashville, and Memphis. These clinics will increase cyber risk awareness in underserved communities, introduce HBCU students to interdisciplinary cybersecurity training, and enhance awareness of ethical concerns arising from the implementation of smart cities and smart homes.

Q. One of the college’s key missions is building diverse talent pipelines for public interest technology and other fields where students of color are underrepresented. What are your strategies and approaches for advancing these goals?

Stillman College recognizes that cybersecurity and technology fields are a mainstay for the future. In November 2017, we experienced a ransomware attack that crashed our entire ERP system, exposing the reality that there were no professionals in Tuscaloosa with the expertise and knowledge to address the issue. In December 2017, we joined the Independent College Enterprise, a consortium of eight colleges of similar size that shares technology infrastructure housed at the University of Charleston in West Virginia. We then developed a cybersecurity concentration to ensure that we were educating students in fields relevant to the challenges of today. We received grant funding from the National Security Agency to support our academic and research development and are on the path toward a Center of Academic Excellence in Cyber Education designation.

Congratulations to Dr. Warrick on her career and Stillman’s ongoing growth. To learn more about HBCUs in the Public Interest Technology University Network, visit our Member Directory and select the HBCU tag.

Q&A with Sylvester Johnson, Faculty Fellow, Public Interest Technology University Network

Photo credit: Ray Meese

Sylvester A. Johnson is Associate Vice Provost for Public Interest Technology and Executive Director of the “Tech for Humanity” initiative advancing human-centered approaches to technology at Virginia Tech. He is the founding director of Virginia Tech’s Center for Humanities, which is supporting human-centered research across multiple disciplines. Sylvester’s research has examined religion, race, and empire in the Atlantic world; religion and sexuality; national security practices; and the impact of intelligent machines and human enhancement on human identity and race. He is a Professor in the Department of Religion and Culture. In addition to having co-facilitated a national working group on religion and US empire, he co-leads a project with Bill Ingram (Assistant Dean of Libraries at VT), supported by The Andrew W. Mellon Foundation, to develop ethically designed, public-interest Artificial Intelligence that can benefit public knowledge institutions in an innovation-driven society.

Sylvester is the author of The Myth of Ham in Nineteenth-Century American Christianity (Palgrave 2004), a study of race and religious hatred that won the American Academy of Religion’s Best First Book award; and African American Religions, 1500-2000 (Cambridge 2015), an award-winning interpretation of five centuries of democracy, colonialism, and freedom in the Atlantic world. Johnson has also co-edited The FBI and Religion: Faith and National Security Before and After 9/11 (University of California 2017) and Religion and US Empire (NYU Press 2022). He is a founding co-editor of the Journal of Africana Religions. Sylvester is writing a book on human identity in an age of intelligent machines and human-machine symbiosis. 

He currently leads “Future Humans, Human Futures” at Virginia Tech, a series of research institutes and symposia funded by the Henry Luce Foundation that focus on technology, ethics, and religion. He is also directing the creation of a university-wide “Tech for Humanity” undergraduate minor, funded by The Andrew W. Mellon Foundation, to prepare future talent at the intersection of humanities, social justice, and technology.

Q. How has PIT-UN helped support your work in the field thus far?

I’m especially excited that the consortium provided funding for Virginia Tech to host a public interest technology summer speaker series in 2022 that successfully engaged hundreds of attendees on a digital platform and introduced them to foundational concepts and practices of PIT. This collaborative effort emerged because several PIT-UN institutions participated in creating content for the speaker series. Their topics ranged from smart cities to AI ethics to human-centered design. [LIST PARTICIPATING INSTITUTIONS]

The PIT consortium constitutes a major intervention in the effort to advance human-centered approaches and outcomes for technology and innovation. It has taken what felt disparate and ethereal in earlier stages and made it more concrete. At Virginia Tech, we launched a “Tech for Humanity” initiative in 2019 to elevate and further human-centered approaches to teaching, research, and public engagement related to technology. The following year we were accepted into PIT-UN, which immediately created a host of relationships and opportunities for collaboration with other academic institutions.

Q. How will this faculty fellow role allow the Network to achieve its immediate and future goals?

One objective of this faculty fellow role is to deepen and enhance opportunities for collaboration across the Network at a time when the Network has enjoyed robust growth. Equally important is the opportunity horizon for translating PIT beyond academic domains. The consortium has achieved great success by cultivating a robust network of academic institutions that are creating new curricula (including one graduate certificate program in PIT) and new research, and are nurturing new talent at both the undergraduate and graduate levels. We have invested in priming the talent pipeline pump. We now have the opportunity to build on that success and engage more directly beyond academic institutions. Governmental and civic organizations, as well as private industry, are essential to the larger ecosystem for making technology truly accountable to public interest and democratic outcomes.

Q. What are your first priorities?

Shaping a more robust context for communicating PIT objectives and aims is a key priority. For instance, most of us learned a lot about the power of digital platforms for communication within our own institutions during the early stages of the pandemic. So one priority will be exploring new digital media to enrich our ability to communicate and engage across the PIT consortium.

A second priority is supporting New America’s leadership as they activate capacity across the Network for engagement with private industry, governmental, and civic institutions.

Q. Where do you see the broader field of PIT in three years?

This is a great question. Public recognition of the public interest as an urgent technology issue continues to grow from year to year. The reasons for this growing recognition are disturbing: progress from innovations in technology has brought challenging and, at times, harmful consequences that affect our entire society. The next few years will create new opportunities for public interest technology to be implemented at a growing scale. This will add to our analysis of technology problems a greater capacity to harness and lead technology through human-centered approaches that are accountable to social justice and equity.

If we do our work well, we will see the PIT field maturing to the point of structuring new and different instruments and institutions for technology outcomes that advance justice and democracy through inclusive approaches that value diversity. I also think PIT will play a chief role in turning some critics of technology into agents for transforming and leading technology. We need more critical practitioners in multiple facets of society, and PIT will be central to this new trajectory.