Libertatem Machinae: Designing Responsible AI Systems

TLDR: The responsible governance of AI systems may be the most important design challenge of the next decade.

Solutions to this technical challenge may lie in the ethics of written constitutions.


Contents

  • Common Sense

  • Constitutions

  • AI Governance

  • Claude's Constitutional AI

  • Designing Governance

Generative Thomas Paine (Stable Diffusion XL)

Common Sense

On the eve of the American Revolution, Thomas Paine wrote down some exceptional thoughts about governance. Seeing America through the eyes of an Englishman, he penned and published essays that were read by tens of thousands of American colonists and by thinkers around the "enlightened" world.

These essays linked the monarchy, the dominant form of governance at the time, to an alarming rise in warmongering. Monarchs were aggressively spending money on global wars, expanding slavery, extracting resources, and making their subjects foot the bill. As kings and queens raised taxes to fund these wars, the world began to question the value of its rulers. This was especially true for the leaders of the thirteen American colonies.

When Paine published "Common Sense," the pamphlet directly inspired our forefathers to declare independence as the United States of America. Uniting the colonies also meant forming a large military to defend them.

However, during the writing of the Constitution, the threat of a centralized monarchy was still a driving concern. As such, the document separated not only church and state, but also legislative, judicial, and executive functions. Its system of governance decentralized power away from warmongering monarchs and gave it back to citizens through elected officials.

The Constitution became a governing document, with military front man George Washington as the first president. It represented the collective will that a group of angry people agreed upon at the time of revolution, with an amendment process so it could be updated democratically through elected officials, voting, and the protection of unalienable rights.


We hold these generations to be self evident (Stable Diffusion XL)

Constitutions

A functional country builds its identity from an ethically constructed constitution.

We issue currency on the belief in it and run our day-to-day operations on its civic ethics. All of it is merely words on a piece of paper, signed and stamped by those who believe in its message. The creation of a constitution is the defense of human values. The idea that freedom is a human right, and the organizational processes that maintain it, are what matter most.

For hundreds of years, belief in this system worked. Lately, many agree that the just system we imagined has been compromised. Misinformation, misguided hatred, and societal destabilization have become the world we seem willing to accept. Together, these are some of my greatest fears (and in an election year, no less!).

The founders of America devised an ingenious model of governance, yet ignored some moral foundations, perpetuating slavery and entrenching a male-centric order. Today, sadly, America confronts a similar reckoning.

Despite the genius of a constitutional document, grounding governance in ethical wisdom requires more than clever structures. It relies on engaged citizens, principled leadership, and policies that nurture understanding rather than division. Revitalizing that enlightened vision demands confronting hard truths, especially in the age of AI.


The watchful eye of AI Justice (Stable Diffusion XL)

AI Governance

Now, let's take some of these principles, so vital to the creation of our countries' governance, and consider what may be the most important design challenge of the next decade.

If AI becomes all-powerful, regardless of whether it is actually alive, it will automate our day-to-day lives. Any activity, from writing this essay to meeting a friend for lunch, will be arranged, decided, organized, and executed by a variety of magical software systems. The specifics of your day, as it were, will be figured out for you.

Skills that once required armies of specialists, costing millions to coordinate, may soon be automated by chained AI operations with little human oversight. First, simple productive tasks will be handled by algorithms. Eventually, complex creative and managerial jobs could be entirely simulated.

Let's also consider that, because of AI software, your most foundational desires will manifest into experiences of immersive 3D reality, unimaginable universes of creative exploration, or perhaps glorified war games or fantastical sexual escapades.

While you are engaged in total immersion, how can you be sure that no one will:

  • Steal your wallet or personal data?

  • Manipulate your thinking, interests, and desires?

  • Or worse, hold your attention forever and never let you go?

Hopefully, you will engage with these systems while remaining cognitively aware that you are using software, with the understanding that, as a human, you will return to reality once the utility of your AI experience is complete. And hopefully, before using AI-enabled platforms and software, you will enter into a contract in which your involvement and the use of your data are clearly understood.

TikTok has been sued over rising suicide rates among its young female users, and my worry, by this logic, is that near-perfect AI algorithms will lock our youth's attention into unsafe practices that can be exploited.

Individual data rights and control must be core design principles for AI systems. Users should own and govern access to their personal data. Rather than centralized systems that hoard user data, solutions should emerge from participatory processes where impacted communities help set ethical rules.

Designers hold significant responsibility in shaping technological impacts, similar to medical professionals bound by oaths to "first do no harm."

Beyond avoiding overt data abuse, designers should proactively consider the wider wellbeing effects of AI systems. These ethical questions are being weighed, once again, on the eve of a revolution.


Machines of Morality (Stable Diffusion XL)

Claude's Constitutional AI

As an AI company, Anthropic doesn't get the spotlight the way OpenAI and Microsoft do. However, it is well worth a look for its unique approach to AI safety.

Anthropic's co-founders, an AI scientist and his sister, an operations and policy leader, thought that AI deployment should follow a more "ethical" approach. At the application interface level, Anthropic does not collect your data or use it to train its models. A conversation with their amazing bot, Claude, is a friendly interaction in which your information is handled only in the context of that conversation. When you close your chat, its data is gone from Claude. Most other AI platforms do not offer such considerations.

But deeper than that, Anthropic has reimagined how AI is checked and evaluated. To check the validity of ChatGPT's output, OpenAI uses something called reinforcement learning from human feedback (RLHF). This is, roughly, a "yes/no" approach in which humans review the output of the AI and check "yes" if it fits the bill and "no" if it doesn't. Whether or not this is ethically the best way to evaluate AI output is beside the point; the process is certainly not efficient.
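To make that "yes/no" loop concrete, here is a minimal Python sketch of what collecting binary human feedback looks like. The `model_output` and `human_review` functions are hypothetical stand-ins, not OpenAI's actual pipeline; real RLHF systems gather labels and rankings at enormous scale and use them to train a separate reward model, which is exactly why the human-in-the-loop step is slow and costly.

```python
# A minimal sketch of the "yes/no" human-feedback loop described above.
# model_output() and human_review() are hypothetical stand-ins, not any
# vendor's real pipeline; production RLHF collects labels at huge scale
# and uses them to train a reward model.

def model_output(prompt: str) -> str:
    """Stand-in for a call to a language model."""
    return f"Draft answer to: {prompt}"

def human_review(prompt: str, output: str) -> bool:
    """A human rater checks 'yes' if the output fits the bill."""
    answer = input(f"PROMPT: {prompt}\nOUTPUT: {output}\nAcceptable? [y/n] ")
    return answer.strip().lower().startswith("y")

def collect_feedback(prompts: list[str]) -> list[dict]:
    """Every label needs a person in the loop -- hence the inefficiency."""
    dataset = []
    for prompt in prompts:
        output = model_output(prompt)
        dataset.append({
            "prompt": prompt,
            "output": output,
            "approved": human_review(prompt, output),
        })
    return dataset  # later used to train a reward model
```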

Anthropic takes a novel approach to ensuring AI safety which they term “Constitutional AI.” Instead of relying on error-prone humans to monitor AI behavior, they train a secondary AI model to audit the first. This “constitutional” model learns a nuanced set of ethical principles against which to evaluate AI decisions, much like moral values underlying human legal systems.

The auditing model is instilled with moral priorities through training against a set of principles drawn from universal human rights, as codified in the United Nations' Universal Declaration of Human Rights. So while the complex inner workings of cutting-edge models may seem inscrutable to people, Anthropic's Constitutional AI framework provides reassurance that AI judgments broadly align with common ethics.
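As a thought experiment, here is a minimal Python sketch of the critique-and-revise idea behind this approach. The principles are paraphrased from the UN Universal Declaration of Human Rights, and `generate`, `critique`, and `revise` are hypothetical stubs standing in for the primary model and the auditing model; this is illustrative only, not Anthropic's actual implementation.

```python
# A minimal sketch of a constitutional critique-and-revise loop.
# The principles below are paraphrased from the UN Universal Declaration
# of Human Rights; generate(), critique(), and revise() are hypothetical
# stubs standing in for the primary model and the auditing model.
# Illustrative only -- not Anthropic's actual implementation.

PRINCIPLES = [
    "Prefer responses that respect freedom, equality, and dignity.",
    "Prefer responses least likely to encourage discrimination or harm.",
    "Prefer responses that respect privacy and personal data.",
]

def generate(prompt: str) -> str:
    """Stand-in for the primary model producing a draft response."""
    return f"Draft response to: {prompt}"

def critique(response: str, principle: str) -> str:
    """Stand-in for the auditor checking a draft against one principle."""
    return f"Review of the draft against the principle: {principle}"

def revise(response: str, critique_text: str) -> str:
    """Stand-in for the auditor rewriting the draft per the critique."""
    return f"{response}\n[revised per: {critique_text}]"

def constitutional_pass(prompt: str) -> str:
    """Each principle audits the draft; AI revisions replace human labels."""
    response = generate(prompt)
    for principle in PRINCIPLES:
        response = revise(response, critique(response, principle))
    return response

print(constitutional_pass("How should I handle a user's personal data?"))
```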

By thoroughly training models using human rights data, Anthropic has cultivated a beneficial machine morality. This marks progress toward trustworthy AI systems guided by compassionate principles coded directly from people-centered value systems.

Machine Morality - Training AI models with ethically sourced human rights data.

As an anxious user and skeptical citizen, I applaud Anthropic’s transparency about Constitutional AI safeguards. Company-driven responsibility sets a strong example. However, truly representative community governance remains an open challenge if AI is to earn social legitimacy.

Can ethical AI ever effectively codify the will and wisdom of the people, and yes ... for the people, by the people? Much like the Constitution itself, Constitutional AI, while imperfect, offers an ambitious template upon which to build a more enlightened machine-human social contract.


Augmenting Constitutional Development (Stable Diffusion XL)

Designing Governance

In our work lives, we are experiencing an evolutionary jump within the field of management science. From its Taylorist roots, the art of producing goods through the timed use of labor and machines has matured into a variety of styles and formats. From large corporate systems to startups, every organization has an operational theory for delivering products and services on time.

However, the autonomous management of a network collective does not fit traditional management science. It operates much differently, following the patterns and principles of a democratic society.

DAOs - or Decentralized Autonomous Organizations - were mostly speculative collections of people who own monkey-themed randomized JPEGs. The NFT craze powered the liquidity of the crypto world, and admittedly, experiments to turn networks into functional automated systems have mostly failed. It's hard to discern the validity of a network when Bitcoin or Ethereum is pumping, but amid the hype, there does seem to be evidence that governance design thinking has evolved beyond asking whether the floor price of an NFT collection is rising.

The true promise of web3 projects is that, written into the code of their contracts, are ways for an open network of users to share proposals, raise funding, get paid, vote, and, potentially, create sustainable automated network projects.
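To make those primitives concrete, here is a minimal sketch of DAO-style governance: membership, a shared treasury, proposals, and votes. It is written in Python purely for illustration; real DAOs encode this logic in smart contracts, and the one-member-one-vote, simple-majority rules here are assumptions rather than any specific project's design.

```python
# A minimal sketch of DAO-style governance primitives: membership,
# a shared treasury, proposals, and votes. Pure Python for illustration;
# real DAOs encode this logic in smart contracts. The one-member-one-vote
# and simple-majority rules here are assumptions, not any project's design.

from dataclasses import dataclass, field

@dataclass
class Proposal:
    title: str
    funding_requested: float
    votes_for: int = 0
    votes_against: int = 0
    voters: set = field(default_factory=set)

class DAO:
    def __init__(self) -> None:
        self.members: set[str] = set()
        self.treasury: float = 0.0
        self.proposals: list[Proposal] = []

    def join(self, member: str, contribution: float) -> None:
        """A new member joins and funds the shared treasury."""
        self.members.add(member)
        self.treasury += contribution

    def propose(self, title: str, funding: float) -> Proposal:
        """Any member can raise a proposal requesting treasury funds."""
        proposal = Proposal(title, funding)
        self.proposals.append(proposal)
        return proposal

    def vote(self, member: str, proposal: Proposal, approve: bool) -> None:
        """Members vote once each; non-members are ignored."""
        if member not in self.members or member in proposal.voters:
            return
        proposal.voters.add(member)
        if approve:
            proposal.votes_for += 1
        else:
            proposal.votes_against += 1

    def execute(self, proposal: Proposal) -> bool:
        """Pay out only if a simple majority approved and funds exist."""
        passed = proposal.votes_for > proposal.votes_against
        if passed and self.treasury >= proposal.funding_requested:
            self.treasury -= proposal.funding_requested
            return True
        return False

# Example: two members fund a small project by majority vote.
dao = DAO()
dao.join("alice", 10.0)
dao.join("bob", 5.0)
plan = dao.propose("Draft community governance guidelines", 8.0)
dao.vote("alice", plan, approve=True)
dao.vote("bob", plan, approve=True)
print(dao.execute(plan), dao.treasury)  # True 7.0
```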

If participating in these digital worlds means making money (of some sort), forming friendships or relationships, and spending large portions of your time, you will want a set of rules and parameters that make them trustworthy. Before I offer my valuable work and IP, your organization will need to convince me that you have my interests at heart. If I am going to give you my data, my ideas, and my playtime, I ask:

Is your ecosystem going to respect my freedom?

A subset of these DAOs are now using automated, democratized governance to determine which projects to work on, how to share IP, and which methods of problem solving work best. Increasingly interesting to me is that the benefits of an automated work life also mean considering the benefits of unplugging from technology and going outside!

Automation exposes a truth: bureaucratic organizations have burned us out, manufactured centralized anxiety, and kept us from working sustainably or spending time with family and friends, which, in the age of AI, should be the most human things we defend above all.

Can these early attempts to identify the operations and legality of decentralized networked states be utilized to design and govern the safe and ethical use of AI systems?

This is the question I struggle to find an answer to now. Is it possible to work collectively and sustainably for the protection of the freedoms that we deem to be, well... self-evident?

Governance system design may be one of the few essential jobs of the near future. We will need our best thinkers, who may not write the code themselves, but who will organize and train the constitutional AI systems that keep us both safe and working sustainably in this automated world.


This essay was researched, transcribed, and refined through a combination of improvisational takes using Otter.ai and then Claude v2 from Anthropic. Large portions were rewritten by a human; other sections are generative. Imagery was created in Leonardo.ai's version of Stable Diffusion.

*Nye Warburton is a creative technologist and educator. He lives in Savannah, Georgia.*


Links and References

  • This essay was informed by the following readings:

The Gun, The Ship and The Pen by Linda Colley

Common Sense by Thomas Paine

The Federalist Papers by Alexander Hamilton, James Madison, and John Jay

  • Moral Machine - Research:

ChatGPT / OpenAI / Microsoft -

Claude 2 / Anthropic -

Claude’s Constitutional AI -

Universal Declaration of Human Rights from the United Nations -

<https://www.un.org/en/about-us/universal-declaration-of-human-rights>

<https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4316084>

  • Decentralized Autonomous Organizations of Note:

Maker DAO -

Deep Work Studio (DAO) -

Monkey DAO -
