The Responsibility of Autonomy

Digital transactions have permitted fraud by design, but as artificial intelligence begins acting on our behalf within automated systems, the equation balancing fraud and growth is changing.

Once upon a time, we recognized our customers. We extended trust, credit, and access to those we recognized.

Synthetic Identity Fraud

Fraud implies impersonation - using assets or data as if you were their rightful owner. Whether it’s using someone else’s credit card, a valet key (or spoofed CAN code) to gain access to a vehicle, or a stolen ID badge to sneak into a building, the physical asset serves as a proxy for the identity that is entitled to access.

As Center for Humane Technology co-founders Tristan Harris and Aza Raskin make clear in The AI Dilemma, our ability to validate the authenticity of content to prove identity is quickly vanishing.

AIs, in the form of transformer-based large language models (LLMs), can already generate insightful and creative content in an increasingly wide array of media. The power of these algorithms lies in their ability to treat data as language.

Generating text that contains insightful knowledge already feels simplistic next to advances promising that we'll soon watch the dreams and internal monologues of others play on screens, rendered with the eerie beauty of our aggregate cultural history collapsed into code. If experts' past predictions are any indication, what has seemed science fiction will become possible in radically less time than even our best informed can foresee.

Our words, our likeness, our voice, our perspective, our experiences - all can be replicated, impersonated, and predicted. We have entered the age of synthetic identity.

A New Class of Responsibility

Tristan and Aza present three rules for humane technology, the first being:

When you invent a new technology, you uncover a new class of responsibility.

They give compelling examples: before cheap digital storage and infinite memory, we didn’t need a right to be forgotten. Before cloud cameras and mass surveillance, we didn’t need a right to privacy.

Before AI, we didn’t need a right to our own identity.

Autonomy Without Supervision

There is a pattern here: first comes the invention of a new technology. But the important second step is its widespread deployment into a system that is unprepared to receive it. New technology is benign when first created. The first hard drive in a private network didn’t threaten our right to control our personal information. The first camera images printed to paper didn’t threaten global privacy. Technology’s reach depends on the mechanism and means of its deployment.

Synthetic identity in and of itself isn’t a threat to our right to our own uniqueness. Yet AI’s deployment into increasingly autonomous systems without adequate supervision is akin to taking an incredibly precocious child and convincing ourselves that the best way to keep him from sticking his hand into the lion’s cage at the zoo is a stern talking-to beforehand.

Any zookeeper will tell you that the only thing that can dissuade the curious is a piece of plate glass.

The Responsibility of Autonomy

As our transactions have increasingly digitized and moved away from face-to-face interactions, we’ve lost the opportunity to build trust through familiarity. The people who were once mainstays in our lives have become interchangeable parts, quickly going the way of assembly-line workers as automation replaces scores of hands with a few highly skilled technicians who watch the work from the remove of monitoring stations and productivity metrics.

Globalization, digital payments, and optimized fulfillment have created seemingly boundless economic growth while continuously shrinking our interactions, leaving dwindling surface area through which to robustly recognize identity. When customers transact, whether in-store, online, or in the exchange of credentials for access to goods and services, we no longer have the ability to trace responsibility back to a provable identity.

The Economics of Fraud Optimization

Digital transactions facilitate ease over distance at the expense of fraud. Since there is little identity surface area at the point of a remote transaction, we have two choices - put up barriers that slow things down and increase the surface area, or accept fraud as an operating expense. For financial transactions, this is easily quantified by weighing the conversion drop-off from tightened identity verification against the cost of fraud. For a large majority of digital purchases, the optimal amount of fraud is non-zero.
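To make that concrete, here is a toy calculation of the trade-off. All numbers are invented for illustration; a real program would estimate conversion and fraud rates from funnel and chargeback data.

```python
# Toy model of the fraud/friction trade-off described above.
# All numbers are invented for illustration.

def expected_profit(orders, margin, conversion, fraud_rate, fraud_cost):
    """Profit = margin on completed sales minus losses on the fraudulent ones."""
    completed = orders * conversion
    return completed * margin - completed * fraud_rate * fraud_cost

# Policy A: frictionless checkout -- high conversion, some fraud.
loose = expected_profit(orders=10_000, margin=8.0,
                        conversion=0.90, fraud_rate=0.01, fraud_cost=40.0)

# Policy B: strict identity checks -- fraud nearly eliminated,
# but the added friction costs a slice of legitimate buyers.
strict = expected_profit(orders=10_000, margin=8.0,
                         conversion=0.80, fraud_rate=0.001, fraud_cost=40.0)

print(f"loose verification:  ${loose:,.0f}")   # $68,400
print(f"strict verification: ${strict:,.0f}")  # $63,680
```

Under these made-up numbers the looser policy wins: tolerating one fraudulent sale in a hundred earns more than the verification that would prevent it.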

However, the lower the margins on a purchase, the higher the relative cost of fraud. And a transaction is any exchange of credentials that grants access not only to goods but also to physical space, equipment, or services. The cost of fraudulent access to a secure location such as a government building, or the potential liability from unauthorized use of equipment such as heavy machinery or weapons of mass destruction, is so high that it tips the scales entirely, making fraud through impersonation incalculably expensive.

Authentication Surface Area

In this equation of fraud prevention, we either accept the cost of fraud in favor of convenience, or we choose to increase the surface area of identity verification to include domains that are impossible to synthesize.

Authentication has long held to the holy trinity of something you know, something you have, and something you are. The de facto standard for two-factor auth has become augmenting a known password with an SMS code to prove possession of a device. Yet even the knowledge of our innermost lives is being modeled and replicated, and will be deployed for profit.

If we can no longer trust any digital content we provide to prove what we know or who we are in the face of synthetic identity, we’re left solely with the devices we carry and the biometric identities of our bodies themselves.
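As a sketch of how the first two factors combine in code - using a TOTP authenticator in place of SMS, since both prove possession of a device - consider the following. It assumes the third-party pyotp library, and every name in it is illustrative:

```python
import hashlib
import hmac
import os

import pyotp  # third-party: pip install pyotp

def hash_password(password: str, salt: bytes) -> bytes:
    # "Something you know": stored only as a salted, stretched hash.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)

# Enrollment: the server keeps a salt, a password hash, and a shared
# TOTP secret that is provisioned onto the user's device.
salt = os.urandom(16)
stored_hash = hash_password("correct horse battery staple", salt)
totp_secret = pyotp.random_base32()

def authenticate(password: str, totp_code: str) -> bool:
    knows = hmac.compare_digest(hash_password(password, salt), stored_hash)
    has = pyotp.TOTP(totp_secret).verify(totp_code)  # proves device possession
    return knows and has

# "Something you are" - biometrics - would be a third, independent check.
current_code = pyotp.TOTP(totp_secret).now()
print(authenticate("correct horse battery staple", current_code))  # True
```

Note that the first factor is exactly the kind of knowledge a synthetic identity can replicate; only the device and the body remain.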

Transacting On Our Behalf

To complicate matters, commerce is a communal affair. Rarely are we acting alone - we constantly vouch for others to act on our behalf.

Before digital payments, trust was easily transitive. If we both go to the same bar, and the barkeep sees that we’re (still) friends, it’s easy and highly secure for me to ask you to put another round on my tab. A stranger walking in off the street who happened to know my name wouldn’t be recognized, and if they tried to use my credit, they might be turned out with a stern boot.

Digital payments make credit transferable as long as you don’t require identity. I could easily hand you a plastic credit card and ask you to buy us both lunch; the vendor has no way of proving whether I actually asked you to. Not only is there no way for the merchant to prove the physical presence of the identity the cardholder claims to represent, but the chain of responsibility that links each party is invisible.

Anonymity has become the price of doing business.

Towards a Chain of Responsibility

AI and autonomous systems are becoming increasingly able to act on our behalf, be it vehicles that drive us and use our credit to pay for their own charging, digital assistants that preempt our needs and restock the pantry, or autonomous delivery systems ferrying payloads ranging from Christmas wishlists to tactical warheads. We will endorse them not only to represent our identities, share our payment methods, and access our offices, homes, and schools; we will also delegate their management to our employees, friends, and children.
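What might that endorsement look like in practice? One candidate primitive is a verifiable chain of delegation, in which each principal cryptographically signs the key of whoever acts for them, so any action can be traced back to a root identity. The sketch below assumes the third-party cryptography package; the names and structure are illustrative, not a standard:

```python
# Minimal sketch of a chain of responsibility built from signed grants.
# Illustrative only: a real system would add scopes, expiry, and revocation.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

def pub_bytes(key: Ed25519PrivateKey) -> bytes:
    return key.public_key().public_bytes(
        serialization.Encoding.Raw, serialization.PublicFormat.Raw)

def grant(delegator: Ed25519PrivateKey, delegate_pub: bytes) -> bytes:
    # The delegator signs the delegate's public key: "this key acts for me."
    return delegator.sign(delegate_pub)

def verify_chain(root_pub: bytes, chain) -> bool:
    # Walk the chain: each grant must be signed by the previous link.
    expected = root_pub
    for delegator_pub, delegate_pub, signature in chain:
        if delegator_pub != expected:
            return False
        try:
            Ed25519PublicKey.from_public_bytes(delegator_pub).verify(
                signature, delegate_pub)
        except InvalidSignature:
            return False
        expected = delegate_pub
    return True

# Owner endorses an assistant, who endorses a delivery agent.
owner, assistant, agent = (Ed25519PrivateKey.generate() for _ in range(3))
chain = [
    (pub_bytes(owner), pub_bytes(assistant), grant(owner, pub_bytes(assistant))),
    (pub_bytes(assistant), pub_bytes(agent), grant(assistant, pub_bytes(agent))),
]

# The agent's authority traces, link by link, back to the owner.
print(verify_chain(pub_bytes(owner), chain))  # True
```

Unlike a handed-over credit card, every link in such a chain names who vouched for whom - the anonymity that has been the price of doing business becomes optional.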

Trust has always been an unspoken contract; we must now develop a language we can speak with the machines that represent us. For who else will be responsible when they act on our behalf?