GAEIA: Building the Future of AI Ethics
Data Science & AI
May 2023
Author: Søren Jørgensen is a Fellow at the Center for Human Rights and International Justice at Stanford University, and a co-founder of GAEIA. He founded the strategy firm ForestAvenue, which is based in Silicon Valley, Brussels, and Copenhagen, and previously served as the Consul General of Denmark for California.
Author: Elise St. John heads Academic Programs and Partnerships at California Polytechnic State University’s New Programs and Digital Transformation Hub, and is a co-founder of GAEIA. She builds and manages cross-disciplinary teams, and designs and leads research and innovation projects that utilize advanced technologies across sectors.
Since ChatGPT’s release in November 2022, public awareness of AI ethics and implications has exploded. As companies and lawmakers grasp for resources to meet this moment with clear and comprehensible strategies for weighing AI’s risks and rewards, what do we in the academy have to offer them?
In 2021, we (Søren Juul Jørgensen, Stanford, and Elise St. John, Cal Poly) launched the Global Alliance for Ethics and Impacts of Advanced Technologies (GAEIA), an interdisciplinary and multicultural collaboration to help companies and governments systematically consider the risks and benefits of AI. We’re excited to share with our PIT-UN colleagues some insights and resources from our journey with GAEIA, and possible directions for growth and expansion.
Each year, GAEIA convenes an international cohort of researchers who collaborate with industry experts to investigate pressing new ethical considerations in technology use and to develop methodologies and training tools for weighing risks and benefits. Our work is guided by a few key principles:
- Changing cultures and norms within industries and companies is just as important as developing strong oversight and regulation of the tech industry.
- Diversity of geography, culture, race/ethnicity, gender, and values is of paramount importance in creating our methodologies and training tools.
- Interdisciplinary collaboration is key to our work and to the future of ethical technology development, deployment, and governance.

Here is what these principles have looked like in action.
Culture Change
I (Søren Jørgensen) worked in and alongside tech startups during the “move fast and break things” era of Silicon Valley in the early 2010s. Having experienced firsthand how damaging this ethos could be, I moved into a fellowship at Stanford, doing research and advising companies on ethical considerations. In one of my early conversations at Stanford, the CEO of a German insurance company said something that really stuck with me: “Please, no more guidelines!”
Of course we need guidelines, but his point was that guidelines without culture change are just another set of rules for corporate compliance. How do you develop a company culture where people care about and understand the risks of technology? Our hypothesis with GAEIA is that companies need simple, iterative processes for collaborative ethical assessment and learning.
The first tool we developed is a simple template to iteratively assess the ethics of a technology by asking the kinds of questions that public interest technology prompts us to consider:
- What is the problem we’re trying to solve with this technology?
- How does the technology work, in simple terms?
- How is data being collected and/or used?
- Who is at risk, and who stands to gain?
- What is our business interest here?
- Is it fair? Is it right? Is it good?
- What action should we take, and how will we communicate our actions?
- How will we evaluate the impact and apply these insights?
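For teams that want to fold this template into their own workflows, the questions are simple enough to encode directly. What follows is a minimal sketch in Python, not an official GAEIA tool: the question list mirrors the template above, while the EthicsAssessment class, its fields, and the unanswered() helper are hypothetical names of our own.

```python
# A minimal, hypothetical sketch of the GAEIA-style assessment template
# encoded for iterative use. Illustration only; not an official GAEIA tool.
from dataclasses import dataclass, field
from datetime import date

TEMPLATE_QUESTIONS = [
    "What is the problem we're trying to solve with this technology?",
    "How does the technology work, in simple terms?",
    "How is data being collected and/or used?",
    "Who is at risk, and who stands to gain?",
    "What is our business interest here?",
    "Is it fair? Is it right? Is it good?",
    "What action should we take, and how will we communicate our actions?",
    "How will we evaluate the impact and apply these insights?",
]

@dataclass
class EthicsAssessment:
    """One pass through the template for a given technology."""
    technology: str
    assessed_on: date = field(default_factory=date.today)
    answers: dict = field(default_factory=dict)  # maps question -> answer

    def unanswered(self) -> list:
        # The template is meant to be revisited, so unanswered questions
        # simply flag where the next iteration should focus.
        return [q for q in TEMPLATE_QUESTIONS if q not in self.answers]

# Example: start an assessment and record one answer.
review = EthicsAssessment(technology="buy now/pay later service")
review.answers[TEMPLATE_QUESTIONS[3]] = (
    "Unbanked users may gain access to credit; overextension is a risk."
)
print(f"{len(review.unanswered())} questions remain for the next iteration")
```

The point of the sketch is the iteration: unanswered questions carry forward to the next review rather than gating a one-time sign-off, in keeping with the simple, iterative process described above.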
To pressure-test this model effectively, my colleague Elise St. John and I knew we needed a diverse, interdisciplinary, global cohort of collaborators to guard against the kinds of bias and reductive thinking that cause so many tech-based harms in the first place.
The Need for Diversity
I (Elise St. John) joined Søren in 2021 to help organize and operationalize the first global network of collaborators, which would focus on the use of AI and advanced technologies in the financial sector. My background is in education policy research, with a focus on issues of equity and the unintended outcomes of well-meaning policies; it actually lent itself quite well to examining the unintended impacts of advanced technologies. At Cal Poly, I work in digital innovation and convene cross-disciplinary student groups to work on real-world public sector challenges through the Digital Transformation Hub (DxHub).
When I reviewed the literature and explored the various academic groups studying tech ethics and the social impacts of financial technology at the time, it became apparent how very Western-centric this work was. Because public interest technology asks us to engage the voices and perspectives of those most exposed to and impacted by technological harms, we knew that the network we convened needed to be international and multicultural. This consideration is especially urgent vis-à-vis AI systems because they have the capacity to marginalize and silence entire populations and cultures, and to exacerbate existing inequalities, in totally automated and indiscernible ways.
Our first cohort consisted of over 50 M.A.- and Ph.D.-level researchers representing Africa, the Americas, Asia, and Europe. Using the DxHub model, we divided them into five groups, each of which worked with an industry adviser to consider real-world ethical dilemmas that companies are facing, using the GAEIA template. In biweekly meetings, the scholars and industry advisers discussed both new and potential ethical dilemmas that fintech services and novel data sources, for example, might inadvertently create. The advisers also spanned geographical regions, further diversifying the ethical frameworks and industry perspectives brought to the conversation. We also came together in monthly inspiration sessions to meet with other leading thinkers on ethics, AI, and fintech.
The value of a truly global and diverse cohort was evident at several points. For example, one of the students introduced an ethical dilemma associated with “buy now/pay later” services. The consensus among many of the Western participants was that such services carry too much risk for users and are inherently prone to exploitation. A student from one of the African nations pushed back on this assessment, though, pointing out the opportunities that these systems could hold for the roughly 45% of people in sub-Saharan Africa who are unbanked. This opened up space for weighing the pros and cons of such a technology in different cultural and economic contexts, and it led to further conversations about, for example, the role of regulation vs. innovation. These were very humbling and important moments, and they were exactly the kinds of conversations that need to become the norm in technology development, deployment, and governance.
Participants from Kenya, Brazil, and India, countries that are highly exposed to climate disasters, also developed a Global South working group. In our current cohort, students in Turkey and Ukraine who are living through natural disasters and war have also built connections and held separate meetings to explore how AI tools might provide swift and effective trauma relief in future crises.
Tech’s Future Must Be Interdisciplinary
We intentionally recruited participants from across disciplines. Our two cohorts have featured M.A. and Ph.D. students from engineering, finance, law, philosophy, psychology, and more. Fundamentally, we want our students to be able to speak many disciplinary languages. Technology is not just the domain of computer programmers; it is embedded in all aspects of society and the organizations where we work. Human resources managers have to understand how to communicate with engineers; product managers have to know enough about psychology to ask the right questions about enticement and deception; entrepreneurs need to be able to consult sociologists about the impacts of technologies on different communities. The list goes on.
We believe that an interdisciplinary approach is not a “nice to have” but a “need to have” for businesses going forward. There’s a growing understanding of the potential risks that businesses face when they don’t have robust ethical decision-making processes: high fines (especially in the European Union), reputational risk among consumers and investors, and the demand from current and prospective employees that companies do no harm and live out good values.
Having worked with hundreds of organizations during our careers, we can say with confidence that most of them don’t want to do bad things. They fundamentally want to understand risks and avoid them, which is why we’re designing the GAEIA resources and platform within the aspirational frameworks of learning and culture change, not corporate compliance. There are good examples of how this approach has worked in the education sector: when educators are encouraged to develop genuine inquiry-oriented approaches to data use and systems change in response to accountability measures, they become invested in the accountability process and in changing outcomes. Similarly, we want leaders and employees to be invested in ethical decision making and to set real metrics that not only ensure legal compliance but also lead to products and services that are both profitable and aligned with the public interest.
What's Next for our Global Cohort
This work started as a project during the COVID-19 pandemic. At the outset, we didn’t know it would turn into a recurring cohort-based model, or that we would further develop that model through the formation of GAEIA. In the first year, students were Zooming in from lockdown and quarantine, sharing their diverse experiences as waves of COVID-19 swept the globe.
The project’s goal was to break down institutional and sector-specific silos, and bring together a cross-disciplinary, global group of scholars to develop a pipeline of leaders versed in the ethics of advanced technology use. We got that and so much more.
We are currently collaborating with people at the Center for Financial Access, Inclusion and Research at Tec de Monterrey (Mexico), who have expressed interest in forming a GAEIA chapter for undergraduates, and we are working now with Strathmore University Business School in Kenya on the development of a certification program. There is an emerging network not unlike PIT-UN that can help universities around the world build capacity and support for PIT research and curricula.
We should also mention the inherent value of building a community of tech ethicists across cultures and geographies. The students independently set up social hours on Zoom structured around simple, fun aspects of culture like favorite foods and music. Students from China, Kenya, Germany, and the U.S. would show up, whether it was 6 a.m. or 6 p.m. locally, with their favorite beverage. Getting to know more about each other’s lived realities, and bonding over simple human activities even while far apart, lays the groundwork for understanding how AI and advanced technologies affect each of us in distinct ways.