Humans seem innately wired to detect even the slightest imperfections in human likeness, a sensitivity that underlies the phenomenon known as the ‘uncanny valley.’ Simply put, we trust our own perception and intuition to tell us what is real and what is not.
This is why early deepfakes—glitchy and riddled with errors like mismatched lip-syncs or jerky movements—were easily dismissed as harmless novelties.
Much has changed since.
The use of deepfakes is not always benign
Today, advances in synthetic media, especially generative AI, have made it cheaper, easier, and faster to create or manipulate digital content. For as little as $1.33, anyone can create a convincing deepfake.
Pair that with the speed, reach, and sheer scale of social media, and the stage is set for bad actors to exploit these tools to spread misinformation, disinformation, and malinformation.
Take British engineering giant Arup, which was scammed out of $25 million after fraudsters used a digitally cloned video of a senior manager on a conference call to trick an employee in Hong Kong into authorising financial transfers. Or “Anne,” a woman in France who lost her entire life savings—$855,000—to a romance scam involving an AI-generated Brad Pitt.
We may well have crossed the ‘uncanny valley’, where seeing is no longer believing
In such a world of ‘counterfeit people’, as philosopher Daniel Dennett put it, who can we trust online? Some warn that we are heading towards a future where shared reality no longer exists, and societal confusion runs rampant over which information sources are reliable.
So, how do we uphold the integrity of digital content? And are we adequately equipped to confront the rise of malicious AI-generated fakes?
In response, regulations, detection tools, and other approaches have been introduced:
1. Regulation
The EU’s Digital Services Act (DSA) introduces measures to combat malicious content, designating certain entities as ‘trusted flaggers’. These flaggers are responsible for identifying potentially illegal content and notifying online platforms. Once content is flagged, platforms must act swiftly to remove any material that proves unlawful.
In the US, several states have enacted laws targeting the misuse of deepfakes, particularly in cases of non-consensual pornography and election interference.
China has implemented strict rules requiring deepfake content to be clearly labelled, so that users can distinguish between real and synthetic media.
Meanwhile, Singapore has banned deepfakes that misrepresent candidates during election periods, to prevent interference and the manipulation of public opinion.
2. Context-based assessment of synthetic media content
A context-based approach evaluates deepfakes within the broader context of how and why they are used: the same synthetic clip might be harmless parody in one setting and election interference in another. This framing helps regulators, platforms, and fact-checkers prioritise the most urgent threats and allocate resources effectively, as the sketch below illustrates.
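To make the idea concrete, here is a minimal sketch, in Python, of how such contextual triage might work. Everything in it, including the signal names, the weights, and the urgency scale, is an illustrative assumption of ours rather than any regulator’s or platform’s actual rubric; real frameworks weigh far richer signals, such as provenance metadata and the poster’s history.

```python
# A minimal, illustrative sketch of context-based triage.
# All signals and weights below are assumptions for demonstration only.
from dataclasses import dataclass

@dataclass
class MediaContext:
    """Contextual signals about a piece of suspected synthetic media."""
    is_labelled: bool          # does the content disclose that it is synthetic?
    depicts_real_person: bool  # does it impersonate an identifiable person?
    election_related: bool     # does it concern an ongoing election?
    estimated_reach: int       # e.g. follower count of the posting account

def triage_score(ctx: MediaContext) -> float:
    """Combine contextual signals into a 0-to-1 urgency score.

    The weights are illustrative assumptions, not a published rubric.
    """
    score = 0.0
    if not ctx.is_labelled:
        score += 0.3   # undisclosed synthetic media has more deceptive potential
    if ctx.depicts_real_person:
        score += 0.3   # impersonation raises the risk of fraud or defamation
    if ctx.election_related:
        score += 0.3   # election-adjacent content is time-critical
    # Reach scales urgency; its contribution is capped at 0.1.
    score += min(ctx.estimated_reach / 1_000_000, 1.0) * 0.1
    return score

# An unlabelled impersonation circulating during an election from a
# large account scores near the top and would be reviewed first.
clip = MediaContext(is_labelled=False, depicts_real_person=True,
                    election_related=True, estimated_reach=250_000)
print(f"urgency: {triage_score(clip):.2f}")
```

The point of the design is that the same clip can score very differently depending on where, when, and by whom it is shared, which is exactly what a purely pixel-level detector cannot capture.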