
Design systems that help people identify what’s authentic instead of labelling all that’s fake

Earlier this month, OpenAI rolled out Sora, a short-form video app that marks its first foray into social media. While the last thing we need is yet another algorithmically curated, endless scroll of videos, Sora is different from its predecessors in that everything in its feed is fake—created entirely using artificial intelligence (AI).

Within days of its launch, the internet was filled with reels of famous (sometimes long-deceased) people in impossible situations—winning a Nobel Prize, stealing GPUs from Target or being escorted off a plane for trying to smuggle a baby kangaroo. While many of these videos were obviously fake, others seemed disconcertingly real.

AI has reached the point where it can excel at just about any form of creative endeavour. I have personally used it to generate images so realistic that they are impossible to distinguish from photographs.

Specialized voice-cloning technology can produce audio in the voice of just about anyone on the planet using nothing more than a short recording of their speech.

And it has become so trivially simple to create beautiful, layered musical compositions in any genre that it feels like all that stands between me and rock stardom is a well-crafted prompt.

As much as this radical democratization of talent has been a boon for the less gifted (like me), it has resulted in a real crisis of truth. For each truly creative piece of content generated by AI, hundreds are being designed to deceive, mislead and confuse. And as AI improves and gets more believable, we are slowly sinking into a vast ocean of artificially generated content that is making it harder and harder to tell what’s real and what is not.

Governments around the world are struggling to come to grips with this problem. Deepfake videos are being used to defame the famous and mislead the innocent. Nudify apps are being used to create scandalous images and false narratives. Cloned voices of loved ones seemingly calling for help in an emergency are being used in phone scams to part the gullible from their cash.

Not to mention the truly sophisticated scams that artfully weave AI-generated content in all forms with real-life scam artistry to create the perfect con.

Policymakers seem to be coalescing around the notion that all AI-generated content should be watermarked. Earlier this month, California’s Governor Gavin Newsom signed into law a requirement that AI-generated content carry provenance data—precise information about the specific app and version used to create it.

More recently, India’s minister for information technology, Ashwini Vaishnaw, suggested that India would also soon have regulations in place to deal with this problem—along with appropriate “technolegal” measures to enforce it.

The trouble is that watermarks alone are unlikely to achieve what we want. Model-side marks are easy to evade—any reasonably skilled digital artist can simply edit them out, or obscure them so well that no one who is not actively looking for them will spot them.

Then there is the question of compliance. While I have no doubt that the big AI labs will immediately implement these new requirements, there are already dozens of alternatives that provide similar functionality but operate under far fewer constraints. All that a law like this would do is encourage those who use these tools for ill intent to move their operations to the margins.

Our focus should not be on identifying what is false, but instead on making it easy to identify what is real. We need to find a way to ensure that anyone who wants to do so can easily identify whether an image or piece of video footage has been captured directly by a camera or not. We need them to be able to distinguish digital art created by a stylus on a tablet from what’s conjured up with a prompt. And to be able to say with certainty that the words on this page were, in fact, written by me and not an AI agent trained for that purpose.

In a world where so much is false, we need to be able to tell what is true.

The Coalition for Content Provenance and Authenticity (C2PA) has created an open technical standard that cameras, applications and editors can use to attach a signed ‘provenance manifest’ to the content that they produce. This digital certificate is embedded directly in the metadata of the digital artefact, indicating exactly who created it and how, as well as what edits were made and by whom. These ‘Content Credentials’ allow users to verify the signature and edit history, so that they can ascertain for themselves how it came into being.
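To make the mechanism concrete, here is a minimal Python sketch of how a signed provenance manifest could work in principle. This is not the C2PA specification or its tooling; the field names, the hypothetical device identity and the Ed25519 signing scheme are simplified assumptions used purely for illustration.

```python
# Minimal sketch of a signed provenance manifest (illustrative only,
# NOT the actual C2PA manifest format or API).
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# A camera or editing app would hold a signing key issued to it.
device_key = Ed25519PrivateKey.generate()

# The content being certified (in reality, the image or video bytes).
asset = b"raw image bytes straight from the sensor"

# A simplified manifest: who created the asset, how, and what was edited.
manifest = {
    "asset_sha256": hashlib.sha256(asset).hexdigest(),
    "created_by": "ExampleCam model X, firmware 1.2",          # hypothetical device
    "edits": [{"tool": "ExamplePhotoEditor 4.0", "action": "crop"}],
}
manifest_bytes = json.dumps(manifest, sort_keys=True).encode()

# The manifest is signed; signature and manifest travel with the file.
signature = device_key.sign(manifest_bytes)

# A viewer later verifies the signature and re-checks the asset hash.
public_key = device_key.public_key()
try:
    public_key.verify(signature, manifest_bytes)
    untampered = hashlib.sha256(asset).hexdigest() == manifest["asset_sha256"]
    print("credentials verified; asset unmodified:", untampered)
except InvalidSignature:
    print("credentials invalid: manifest was altered or forged")
```

The point of the sketch is only that a verifiable link between a piece of content, its creator and its edit history can travel with the file itself, so that anyone viewing it can check how it came into being.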

While credentials can be stripped out of the metadata, in time, the very absence of credentials will come to signify deceit. I can see content credentials becoming a sign that everyone looks for before choosing what to consume.

When that happens, digital tool-makers will ensure that these credentials are incorporated into their products so that creators can easily certify the origin of what they produce. Distributors and publishers will make sure that content made available to the public carries an auditable record of all the edits made to the raw footage, so that consumers can decide for themselves which edits are acceptable and how far they are willing to stray from the original.

We are already at the point where we need to disbelieve much of what we are asked to consume. This is not the time to label everything that is not true, but instead to identify what little is.

The author is a partner at Trilegal and the author of ‘The Third Way: India’s Revolutionary Approach to Data Governance’. His X handle is @matthan.
