
Code for Charlottesville Teams Up with Civil Rights Advocates

Data Science & AI 

May 2023

Jonathan Kropko, professor of data science at the University of Virginia

Author: Jonathan Kropko is an Assistant Professor at the School of Data Science. His research interests include civic technology, remote environmental sensing, survival and time series analysis, and missing data imputation. He also leads Code for Charlottesville, the local chapter of Code for America that invites the community to volunteer on important issues.

The Problem

In the U.S., it is unconstitutional for someone to be tried multiple times for the same crime. So why then are people with criminal records punished again and again for past convictions — and even for past charges that did not result in conviction?

Anytime an individual charged with a crime appears in a district or circuit court, the charge creates a criminal record that can be found by the general public. In Virginia, these records can be accessed online in a matter of seconds, facilitating widespread criminal background checks in employment, housing, banking, and other decisions about whether to provide basic services. Schiavo (1969, p. 540) calls this practice “multiple social jeopardy” because although it is unconstitutional for a defendant to stand trial multiple times for the same charge, a person with a criminal record is punished by society over and over again through the withholding of basic services and opportunities. The result is a permanent underclass of people who are denied access to the resources and pathways they need to rebuild their livelihoods. 

A growing movement, led by legal aid societies such as the Legal Aid Justice Center in Charlottesville, Virginia, and nonprofit organizations such as Nolef Turns, advocates for these criminal records to be destroyed (through a process called criminal record expungement) or hidden from public view (what’s known as record sealing). Both expungement and record sealing have been shown to reduce recidivism, which is, ostensibly, an ultimate goal of the justice and corrections systems. 

Prior to 2021, only dismissals and cases of mistaken identity were eligible for criminal record sealing in Virginia. Even then, a qualifying individual had to complete a lengthy and costly petition process. Virginia enacted a law in 2021 that for the first time provided for automatic sealing of criminal records and extended eligibility for sealing to certain low-level convictions, such as possession of marijuana. The law goes into effect in 2025.

While the law represents real progress, it also comes with many restrictions and caveats: an individual can have no more than two records sealed over their lifetime; they must have no arrests or charges in the past three years; they must have no prior convictions; they must wait seven years with no additional convictions in order for the record to be sealed; and more.

All of which raises the question: How many people will actually qualify to have their records sealed once the law takes effect? Answering this question would help advocates decide where and how to focus their lobbying efforts, to ensure that the new law will in fact apply to the maximum number of people whose records deserve to be expunged or sealed.

The Project

Code for Charlottesville, a volunteer group of tech professionals and students that I lead, worked with the Legal Aid Justice Center (LAJC) and Nolef Turns to apply the tools of public interest technology (PIT) to help answer this question.

Our task was simple, but not easy: collect all public records from the Virginia district and circuit criminal courts between 2009 and 2020; anonymize the records; and then count the number of records that would qualify for automatic sealing or petition sealing. 

For any PIT project, it’s important to ask what data is available, how it was collected, and whether there are any privacy concerns.

Code for Charlottesville volunteers present findings at the University of Virginia Data Justice Academy

We used bulk data scraped from the web by Ben Schoenfeld, a computer engineer and civic tech enthusiast. While the current Online Case Information System 2.0 bans web scraping, Ben collected the data from version 1.0 of the system, which had no such restriction, and replaced individual defendants’ names and dates of birth with an anonymized numeric ID. This allowed us to use the entirety of a defendant’s record without knowing the defendant’s identity. Because the data was anonymized, we were confident that the solutions we built would not cause further harm to the people in the database.
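To give a flavor of what that anonymization step looks like, here is a minimal sketch in Python; the field names and the mapping logic are illustrative assumptions, not Ben’s actual pipeline:

```python
import pandas as pd

# Hypothetical raw court records; the real scraped data has many more fields.
records = pd.DataFrame({
    "name": ["DOE, JOHN", "DOE, JOHN", "ROE, JANE"],
    "dob": ["1980-01-01", "1980-01-01", "1992-06-15"],
    "charge_code": ["18.2-250", "18.2-96", "18.2-248"],
})

# Map each unique (name, dob) pair to an opaque numeric ID, then drop the
# identifying columns so a defendant's records stay linked to one another
# without revealing who the defendant is.
identity = records[["name", "dob"]].apply(tuple, axis=1)
records["person_id"] = identity.map({p: i for i, p in enumerate(identity.unique())})
anonymized = records.drop(columns=["name", "dob"])
print(anonymized)
```

Because the mapping runs only one way, the published data lets researchers assemble a defendant’s full record history without being able to recover the underlying identity.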

In total, the data contains more than 9 million individual court records and more than 3 million distinct defendants. Code for Charlottesville volunteers built a decision tree-based classifier that translates all of the restrictions in the law into logical conditions that can be evaluated quickly in code. This function takes in all of a person’s court records and outputs a list that identifies which of the records would qualify to be automatically sealed, which would be eligible to be sealed by petition, and which would be ineligible for sealing.
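As a rough illustration of the approach, here is a minimal sketch of such a rule evaluator; the field names (`case_id`, `date`, `disposition`) are hypothetical, and only a small subset of the statute’s conditions is modeled:

```python
from datetime import date, timedelta

THREE_YEARS = timedelta(days=3 * 365)
SEVEN_YEARS = timedelta(days=7 * 365)

def classify_records(records, today=date(2025, 7, 1)):
    """Label each of one person's court records as 'automatic', 'petition',
    or 'ineligible' for sealing, under a simplified reading of the law."""
    conviction_dates = [r["date"] for r in records if r["disposition"] == "guilty"]
    # No arrests or charges in the past three years (simplified check).
    recent_activity = any(today - r["date"] < THREE_YEARS for r in records)
    labels = {}
    for r in records:
        # A conviction in the seven years after a record blocks its sealing.
        later_conviction = any(r["date"] < d < r["date"] + SEVEN_YEARS
                               for d in conviction_dates)
        if recent_activity or later_conviction:
            labels[r["case_id"]] = "ineligible"
        elif r["disposition"] == "dismissed" and not conviction_dates:
            labels[r["case_id"]] = "automatic"
        else:
            labels[r["case_id"]] = "petition"
    return labels

# Example: a dismissal followed two years later by a conviction.
person = [
    {"case_id": "CR1", "date": date(2012, 3, 1), "disposition": "dismissed"},
    {"case_id": "CR2", "date": date(2014, 8, 9), "disposition": "guilty"},
]
print(classify_records(person))  # {'CR1': 'ineligible', 'CR2': 'petition'}
```

Expressing each statutory restriction as a boolean check like this is what makes it feasible to classify 9 million records in a single pass.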

The Impact

According to our findings, more than 1.4 million records from 2009 to 2020 will immediately qualify for automatic record sealing once the law is implemented in 2025. More than 1 million additional records will become eligible if the individuals with those records avoid any convictions for the remainder of a waiting period. And 3 million more cases will, immediately or pending a waiting period, be eligible for sealing by petition.

We used our model to calculate how many more people would be eligible for record sealing if specific restrictions were loosened or removed. We even broke these counts down to the level of the Virginia House of Delegates or Senate district so that the Legal Aid Justice Center could show a delegate or senator the results for their district, making the impact directly visible to the decision makers.
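Once each anonymized record carries both a sealing label and a legislative district, those per-district counts are a simple aggregation. A hypothetical sketch (the column names are assumptions):

```python
import pandas as pd

# Hypothetical classified output: one row per court record, carrying the
# sealing label from the classifier and the court's House district.
results = pd.DataFrame({
    "house_district": [57, 57, 25, 25, 25],
    "label": ["automatic", "petition", "automatic", "ineligible", "petition"],
})

# Count sealable records per House of Delegates district so advocates can
# show each delegate the projected impact in their own district.
by_district = (results[results["label"] != "ineligible"]
               .groupby(["house_district", "label"])
               .size()
               .unstack(fill_value=0))
print(by_district)
```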

The LAJC used our results in discussions with the Virginia House and Senate to advocate for specific changes to the 2021 law that would expand record sealing access to even more people. This project demonstrates how public interest technology — even when the group of workers is small — can provide right-sized tech tools that support democracy and advance justice.


GAEIA: Building the Future of AI Ethics

Data Science & AI

May 2023

Søren Jørgensen, co-founder of the Global Alliance for Ethics and Impacts of Advanced Technologies

Author: Søren Jørgensen is a Fellow at the Center for Human Rights and International Justice at Stanford University, and a co-founder of GAEIA. He founded the strategy firm ForestAvenue, which is based in Silicon Valley, Brussels, and Copenhagen, and previously served as the Consul General of Denmark for California.

Elise St. John, co-founder of the Global Alliance for Ethics and Impacts of Advanced Technologies

Author: Elise St. John heads Academic Programs and Partnerships at California Polytechnic State University’s New Programs and Digital Transformation Hub, and is a co-founder of GAEIA. She builds and manages cross-disciplinary teams, and designs and leads research and innovation projects that utilize advanced technologies across sectors.

Since ChatGPT’s release in November 2022, public awareness of AI ethics and implications has exploded. As companies and lawmakers grasp for resources to meet this moment with clear and comprehensible strategies for weighing AI’s risks and rewards, what do we in the academy have to offer them?

In 2021, we (Søren Juul Jørgensen, Stanford, and Elise St. John, Cal Poly) launched the Global Alliance for Ethics and Impacts of Advanced Technologies (GAEIA), an interdisciplinary and multicultural collaboration to help companies and governments systematically consider the risks and benefits of AI. We’re excited to share with our PIT-UN colleagues some insights and resources from our journey with GAEIA, and possible directions for growth and expansion.

Each year, GAEIA convenes a cohort of international researchers to collaborate with industry experts to investigate new, pressing ethical considerations in technology use and to develop methodologies and training tools for weighing risks and benefits. Our work is guided by a few key principles:

  • Changing cultures and norms within industries and companies is just as important as developing strong oversight and regulation of the tech industry.
  • Diversity of geography, culture, race/ethnicity, gender, and values is of paramount importance in creating our methodologies and training tools.
  • Interdisciplinary collaboration is key to our work and to the future of ethical technology development, deployment, and governance.

Here is what these principles have looked like in action.

Culture Change

I (Søren Jørgensen) worked in and alongside tech startups during the “move fast and break things” era of Silicon Valley’s early 2010s. Having experienced firsthand how damaging this ethos could be, I moved into a fellowship at Stanford, doing research and advising companies on ethics considerations. In one of my early conversations at Stanford, the CEO of a German insurance company said something that really stuck with me: “Please, no more guidelines!”

Of course we need guidelines, but his point was that guidelines without culture change are just another set of rules for corporate compliance. How do you develop a company culture where people care about and understand the risks of technology? Our hypothesis with GAEIA is that companies need simple, iterative processes for collaborative ethical assessment and learning. 

Guidelines without culture change are just another set of rules for corporate compliance.

The first tool we developed is a simple template to iteratively assess the ethics of a technology by asking the kinds of questions that public interest technology prompts us to consider:

  • What is the problem we’re trying to solve with this technology?
  • How does the technology work, in simple terms?
  • How is data being collected and/or used?
  • Who is at risk, and who stands to gain?
  • What is our business interest here?
  • Is it fair? Is it right? Is it good?
  • What action should we take, and how will we communicate our actions?
  • How will we evaluate the impact and apply these insights?
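One way to operationalize the template, purely as a hypothetical sketch (GAEIA’s actual tooling may look nothing like this), is to encode the questions as data so that each iteration’s answers can be recorded and revisited:

```python
# Hypothetical encoding of the assessment template for iterative use.
TEMPLATE = [
    "What is the problem we're trying to solve with this technology?",
    "How does the technology work, in simple terms?",
    "How is data being collected and/or used?",
    "Who is at risk, and who stands to gain?",
    "What is our business interest here?",
    "Is it fair? Is it right? Is it good?",
    "What action should we take, and how will we communicate our actions?",
    "How will we evaluate the impact and apply these insights?",
]

def new_assessment(project: str, reviewers: list[str]) -> dict:
    """Start a blank assessment round; answers are filled in
    collaboratively and the record is revisited on each iteration."""
    return {"project": project, "reviewers": reviewers,
            "answers": {q: None for q in TEMPLATE}}
```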

To effectively pressure-test this model, my colleague Elise St. John and I knew we needed a diverse, interdisciplinary, global cohort of collaborators to guard against the kinds of bias and reductive thinking that cause so many tech-based harms in the first place.

The Need for Diversity

I (Elise St. John) joined Søren in 2021 to help organize and operationalize the first global network of collaborators, which would focus on the use of AI and advanced technologies in the financial sector. My background is in education policy research, with a focus on issues of equity and the unintended outcomes of well-meaning policies; it lent itself quite well to the examination of unintended impacts of advanced technologies. At Cal Poly, I work in digital innovation and convene cross-disciplinary student groups to work on real-world public sector challenges through Cal Poly’s Digital Transformation Hub (DxHub).

Images courtesy of Cal Poly

Cal Poly’s Digital Transformation Hub

When I reviewed the literature and explored the various academic groups studying tech ethics and the social impacts of financial technology at the time, it became apparent how very Western-centric this work was. Because public interest technology asks us to engage the voices and perspectives of those most exposed to and impacted by technological harms, we knew that the network we convened needed to be international and multicultural. This consideration is especially urgent vis-a-vis AI systems because they have the capacity to marginalize and silence entire populations and cultures, and to exacerbate existing inequalities, in totally automated and indiscernible ways. 

Our first cohort consisted of over 50 M.A.- and Ph.D.-level researchers representing Africa, the Americas, Asia, and Europe. Using the DxHub model, we broke them up into five groups, each of which worked with an industry adviser to consider real-world ethical dilemmas that companies are facing, using the GAEIA template. In biweekly meetings, the scholars and industry advisers discussed both new and potential ethical dilemmas that fintech services and novel data sources, for example, might inadvertently create. The advisers also spanned geographical regions, further diversifying the ethical frameworks and industry perspectives brought to the conversation. We also came together in monthly inspiration sessions to meet with other leading thinkers on ethics, AI, and fintech.

Public interest technology asks us to engage the voices and perspectives of those most exposed to and impacted by technological harms.

The value of a truly global and diverse cohort was evident at several points. For example, one of the students introduced an ethical dilemma associated with “buy now/pay later” services. The consensus among many of the Western participants was that such services carry too much risk for users and are inherently prone to exploitation. A student from one of the African nations pushed back on this assessment, though, pointing out the opportunities that these systems could hold for the roughly 45% of people in sub-Saharan Africa who are unbanked. This opened up space for weighing the pros and cons of such a technology in different cultural and economic contexts, and it led to further conversations about the role of regulation vs. innovation, for example. These were very humbling and important moments, and they were exactly the kinds of conversations that need to become the norm in technology development, deployment, and governance.

Participants from Kenya, Brazil, and India, countries that are highly exposed to climate disasters, also developed a Global South working group. In our current cohort, students in Turkey and Ukraine who are living through natural disasters and war have likewise built connections and held separate meetings to explore how AI tools might provide swift and effective trauma relief in future crises.

Tech's Future Must Be Interdisciplinary

We intentionally recruited participants from across disciplines. Our two cohorts have featured M.A. and Ph.D. students from engineering, finance, law, philosophy, psychology, and more. Fundamentally, we want our students to be able to speak across many disciplinary languages. Technology is not just the domain of computer programmers. It is embedded in all aspects of society and the organizations where we work. Human resources managers have to understand how to communicate with engineers; product managers have to know enough about psychology to ask the right questions about enticement and deception; entrepreneurs need to be able to consult sociologists about the impacts of technologies on different communities. The list goes on. 

We believe that an interdisciplinary approach is not a “nice to have” but a “need to have” for businesses going forward. There’s a growing understanding of the potential risks that businesses face when they don’t have robust ethical decision-making processes: high fines (especially in the European Union), reputational risk among consumers and investors, and the demand from current and prospective employees that companies do no harm and live out good values. 

Having worked with hundreds of organizations during our careers, we can say with confidence that most of them don’t want to do bad things. They fundamentally want to understand risks and avoid them, which is why we’re designing the GAEIA resources and platform within the aspirational frameworks of learning and culture change, not corporate compliance. You can find good examples of how this approach has worked in the education sector. When educators are encouraged to develop genuine inquiry-oriented approaches to data use and systems change in response to accountability measures, they become invested in the accountability process and changing outcomes. Similarly, we want leaders and employees to be invested in ethical decision making, to set real metrics that not only ensure legal compliance but also lead to products and services that are profitable while at the same time aligning with the public interest.

What's Next for Our Global Cohort

This work started as a project during the COVID-19 pandemic. At the outset, we didn’t know it would turn into a recurring cohort-based model and that we would further develop the model with the formation of GAEIA. In the first year, students were Zooming in from lockdown and quarantine and were sharing their diverse experiences as the waves of COVID-19 spanned the globe. 

The project’s goal was to break down institutional and sector-specific silos, and bring together a cross-disciplinary, global group of scholars to develop a pipeline of leaders versed in the ethics of advanced technology use. We got that and so much more. 

We are currently collaborating with people at the Center for Financial Access, Inclusion and Research at Tec de Monterrey (Mexico), who have expressed interest in forming a GAEIA chapter for undergraduates, and we are working now with Strathmore University Business School in Kenya on the development of a certification program. There is an emerging network not unlike PIT-UN that can help universities around the world build capacity and support for PIT research and curricula. 

We should also mention the inherent value of building a tech ethicist community across cultures and geographies. The students independently set up social hours on Zoom that were structured around simple, fun aspects of culture like favorite foods and music. Students from China, Kenya, Germany, and the U.S. would show up on Zoom, whether it was 6 a.m. or 6 p.m. locally, with their favorite beverage. Getting to know more about each other’s lived realities, and bonding over simple human activities, even while far away, is the ground for understanding how AI and advanced technologies affect each of us in distinct ways.