A Vote Against AI Governance

An interview with Dr. Denisa Reshef Kera, exploring innovative strategies for AI regulation and the importance of democratic participation in shaping our AI-driven future

Photo: Denisa Reshef Kera

The meteoric advancement of artificial intelligence forces humanity to develop new approaches to AI governance. We spoke to Dr. Denisa Reshef Kera, a philosopher and designer experimenting with creative strategies of public engagement in emerging science and technology issues, about how society can shape and regulate AI development in a more democratic and participatory manner.

Q: Your work involves using AI agent-to-agent simulations for deliberation processes. Why is public engagement important in AI governance?

A: Services like ChatGPT are not just interfaces to AI systems, but finely tuned "agents" designed to assist users while resisting certain queries or issues. This subtle form of censorship, which is arbitrary and invisible, makes such emerging infrastructures dangerous to a free society. To reveal these hidden instructions, the system prompts, people try to "jailbreak" the AI with creative prompts, often through role-playing or far-fetched scenarios.

The role-playing involved in fine-tuning AI models makes the whole experience rather theatrical. That's why I claim that interaction with AI is less about "prompt engineering" or technocratic control and more about staging and performing scenarios. To understand how we can manage this uncertainty and learn how to regulate and live with AIs, it's more useful to look into theater and performance studies.

What AI models reveal is that our whole world is indeed a stage, where, as René Descartes famously put it, "larvatus prodeo" - we advance and interact by being "masked" and playing a role. This metaphor was revived in the 20th century by Erving Goffman, who described all our social interactions as driven by masks and roles that craft our self-presentation to negotiate various situations.

The real issue is not some lack of true agency or identity that reality, society, or technology have to conform to, but quite the opposite. The roles, masks, and performances let us experience radical freedom to establish our identity; they give temporary solutions and experiences that define our aspirations. This is how I view the political dimension of all theater and role-playing that I believe is becoming crucial for AI governance.

I see AIs as a "theatrical" infrastructure through which we experience various roles and stage conflicts and issues to maintain or contest the status quo in some domain. Rather than tools to model, predict, and gain certainty, I see them as tools to experience various scenarios, roles, and perspectives, and then negotiate and deliberate upon solutions. This approach enables public engagement and political deliberation, and serves as a public sphere.

Q: Could you explain how AI agent-to-agent simulations work and how you use them?

A: We use AI agent-to-agent simulations as a hands-on exercise to make AI ethics and regulation issues more engaging. For example, we staged a mock trial based on a real incident where an AI-trained robotic hand in a chess competition broke a kid's finger. Since this was in 2023, we used GPT-3.5 to simulate interactions between the different parties involved in the case, including AI developers, providers, the competition organizer, and the victim's family.

We explored how the EU AI Act and AI Liability Directive would apply in this scenario. The "agents" (two lawyers and one judge) surprised us by identifying weaknesses in the regulation when applied to such a concrete case. This exercise helped us critically examine how regulations will be interpreted and applied in real-world situations, revealing blind spots and potential future failures in their application.
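The mock-trial format described above can be sketched as a simple turn-taking loop between role-prompted agents. This is an illustrative sketch, not the author's actual code: the role names, prompts, and the `call_model` stub are assumptions, and in a real run `call_model` would call a hosted LLM API (the interview mentions GPT-3.5) instead of returning a canned line.

```python
# Minimal agent-to-agent mock-trial loop. Each agent performs a role defined
# by a system prompt; the transcript accumulates turns for all agents to see.
from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    system_prompt: str  # the role the agent performs, e.g. plaintiff's lawyer

def call_model(system_prompt: str, transcript: list[str]) -> str:
    """Stand-in for a real LLM call; echoes a canned line so the sketch
    runs offline. A real version would send the role prompt plus the
    transcript so far to a chat model."""
    return f"[responding in role: {system_prompt[:40]}...]"

def run_trial(agents: list[Agent], rounds: int) -> list[str]:
    transcript: list[str] = []
    for _ in range(rounds):
        for agent in agents:  # lawyers argue in turn, then the judge weighs in
            reply = call_model(agent.system_prompt, transcript)
            transcript.append(f"{agent.name}: {reply}")
    return transcript

agents = [
    Agent("Plaintiff's lawyer", "Argue for the victim's family under the EU AI Act."),
    Agent("Defense lawyer", "Defend the AI developer and competition organizer."),
    Agent("Judge", "Weigh both arguments against the AI Liability Directive."),
]
transcript = run_trial(agents, rounds=2)
print(len(transcript))  # 3 agents x 2 rounds = 6 turns
```

Sharing the full transcript with every agent on each turn is what lets the simulated judge spot tensions between the two lawyers' readings of the regulation.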

In another project, we involved members of a local hackerspace who were concerned about AI regulations prohibiting access to open models. We designed an agent representing the interests of open-source developers and activists, then let it rewrite problematic passages of the EU AI Act to submit feedback in ongoing negotiations.

Q: You've mentioned the concept of a "parallel public sphere" where AI agents represent us. How do you envision this playing out in real-world scenarios?

A: We have a workshop format for stakeholders who negotiate regulations or discuss pressing issues. Participants design AI agents representing their different views through prompts, and we then unleash these agents to have a discussion with each other. We follow this deliberation process to identify potential issues and dead ends.

So far, we've seen the limits of the GPT-4 model, which is fine-tuned to avoid conflicts. It usually mellows the positions of our agents, who start organizing committees and try to outsource the discussion instead of really fighting over it. We plan to use other models, such as LLaMA and Claude, to compare outcomes and get a better idea of what's useful for such simulations.
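One rough way to detect the "mellowing" described above is to measure how similar opposing agents' statements become over the rounds. This is a hedged sketch of one possible metric, not the author's method: the Jaccard word-overlap measure, the threshold, and the sample statements are all illustrative assumptions.

```python
# Flag deliberation rounds where opposing agents' language converges,
# a rough sign they have stopped defending distinct positions.
def jaccard(a: str, b: str) -> float:
    """Word-level Jaccard similarity between two statements."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def flag_convergence(turns_a: list[str], turns_b: list[str],
                     threshold: float = 0.5) -> list[int]:
    """Return the round indices where both agents' statements overlap heavily."""
    return [i for i, (a, b) in enumerate(zip(turns_a, turns_b))
            if jaccard(a, b) >= threshold]

# Hypothetical transcript: by round 1 both agents retreat to the same
# committee-forming move instead of arguing their positions.
open_source_dev = ["open models must stay freely available",
                   "let us form a committee to study availability"]
regulator       = ["high-risk models need strict licensing",
                   "let us form a committee to study availability"]
print(flag_convergence(open_source_dev, regulator))  # prints [1]
```

Comparing such convergence scores across models (GPT-4 versus LLaMA or Claude, as the interview suggests) would give a crude but quantifiable sense of which model keeps the simulated conflict alive.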

This approach allows us to test and refine regulatory frameworks in a controlled environment before they're applied in the real world. Moreover, by simulating various scenarios and conflicts, we can explore how AI can both support and challenge democratic values. It's an experimental approach to regulation and governance that is engaging and fits recent calls for more experimental forms of governance, such as regulatory sandboxes.

Q: Your approach seems to challenge current obsessions with Artificial General Intelligence. What do you believe are the most pressing issues in AI ethics and regulation that aren't getting enough attention?

A: I'm surprised by the number of scholars preoccupied with "aligning" and "embedding" supposedly universal values into AI infrastructures. They believe that the right alignment will "tame" the technology and save humanity from a hypothetical existential threat posed by Artificial General Intelligence (AGI). I see this as a technocratic distraction, creating a simple boogeyman or new god to rule the masses.

My problem with AI or any technology is political - I worry about the monopolization and the unchecked concentration of power over AI by concrete corporate actors that promise governments they will be in charge of the new "aligned" infrastructure. Alignment with predefined, well-meant values and intentions of a few will not prevent the arbitrary exercise of power. Good governance is only possible in a free and open society that allows discussions and processes to challenge any decision.

Q: You've been invited by the European Commission's Joint Research Center to conduct a workshop. Can you tell us more about this workshop and its potential impact on EU policy?

A: As part of their 5th "Citizen Participation and Deliberative Democracy Festival," I was invited to conduct a workshop on using AI agents for deliberation. The festival brings together people interested in engaging citizens through deliberative and participatory formats in technology, science, and policymaking. Participants come from EU and national public institutions, politicians, civil society organizations, researchers, practitioners, artists, and ordinary citizens.

I'm also active as the alternate representative of the Czech Republic in international AI policy efforts. In this advisory role, I see a key challenge in implementing ethical AI policies as the closed nature of most regulatory deliberations, which don't involve ordinary citizens, NGOs, or hackerspaces. I hope my format of agent simulations will offer a way to do this in an engaging and fun way.

With my collaborators, I'm proposing this as a form of exploratory sandbox for AI policy, open to inputs from various stakeholders. It's a way to experience and understand the technology, realize how it serves different interests and agendas, and then negotiate it. By fostering public and community engagement in the early phases of the R&D process, we can establish more robust oversight mechanisms and offer a model for various communities to adopt the technology.

This experimental and participatory approach is also the main topic of my recent book, which uses examples of blockchain technologies. There I trace the main challenge of good governance back to early anecdotes about sundials and water clocks, which illustrate how instruments shape us into what the Greeks and Romans described as "hungry parasites." The term, originally "para-sitos," referred to citizens who, having lost their sense of purpose and agency, had to follow the new clocks on when to eat. They become dependent on external sources for survival - waiting passively for a universal income or mere breadcrumbs from their master's tables.

I hope that my approach reverses this and integrates public engagement and agency into the heart of technological development.