Maybe the real deepfakes were the lies we believed on the way
In 2014, a Tumblr post was shared to Reddit, where it was roundly mocked: an implausible anecdote about a homeless man dancing to Gangnam Style and a miserly rich man being shamed in front of a crowd of applauding strangers. Over the years, Oppa Homeless Style, as it came to be known, became an infamous part of the ‘Tumblr canon’: a widely acknowledged example of the absurdity of a platform at the time famed for hosting naïve teens and ‘social justice warriors’ (a term used by people who think opposing social injustice and oppression is a bad thing, for some reason.)
But there was one small problem: there’s no evidence that Oppa Homeless Style was ever posted to Tumblr before it was ‘screenshotted’ and posted to Reddit. The obviously fake story was, itself, a fake.
If you’re interested in this particular example, I can recommend a comprehensive video essay by YouTube user SarahZ and the original research by Tumblr user heritageposts. But really, this is just that—an example of a wider problem when it comes to hoaxes and fakes on the Web.
At the beginning of this month, Chris Ume, the creator of a widely shared deepfake video appearing to show actor Tom Cruise (in fact actor Miles Fisher), spoke to Sky News. He said:
“So it’s important on my side to create awareness so people start thinking twice when they see similar videos.
“In a year from now people need to question what they’re looking at and it’s important for journalists to confirm their sources and where they got it.”
With the rise of deepfakes, there has been a clamour from some people for more watermarking, or for the use of blockchains to allow people to ‘sign’ or ‘verify’ things that they say in public.
To my mind, this sounds too much like solutionising. While it may be wise to ‘watermark’ statements and videos, I’m yet to see anything that’s convincingly infallible. Signatures in some kind of register depend on that register being trustworthy, tamper-proof, and permanently available. Recent examples such as Verrit have not inspired confidence.
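To make that dependency concrete, here is a minimal sketch of what cryptographically signing a public statement involves, written in TypeScript against Node’s built-in crypto module. The keypair and the statement are purely illustrative:

```typescript
// A minimal sketch, not a real verification scheme.
import { generateKeyPairSync, sign, verify } from "node:crypto";

// The author generates a keypair. For anyone else to check their
// statements, the public key must live in some register that is
// trustworthy, tamper-proof, and permanently available.
const { publicKey, privateKey } = generateKeyPairSync("ed25519");

const statement = Buffer.from("I did not say that.");

// Signing proves the statement matches a particular private key...
const signature = sign(null, statement, privateKey);
console.log(verify(null, statement, publicKey, signature)); // true

// ...and verification against any other key simply fails.
const impostor = generateKeyPairSync("ed25519");
console.log(verify(null, statement, impostor.publicKey, signature)); // false

// Note what the mathematics does NOT tell you: whose key is whose.
// That mapping of keys to people is exactly the part you have to
// take on trust from the register.
```

The cryptography is the easy part; the moment someone can tamper with, or simply take down, the mapping between keys and people, the signatures prove nothing useful.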
The true problem here is twofold:
- People in general are not equipped to think critically about whether or not they can trust a source;
- People in general are not equipped to understand the extent to which material can be doctored.
Neither of these problems is specific to deepfaked videos. If one wanted to produce a fake video of (e.g.) Tom Cruise before now, this could be achieved with a good enough combination of lookalikes, impressionists, and prosthetics. And even this may be overkill: if I want to pretend that Tom Cruise said something he didn’t, all I need to do is create something that looks like a public statement (for instance, on Twitter.) I don’t need neural networks or advanced AI to do this; I can use Microsoft Paint, or the developer tools in my web browser.
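To show quite how low the bar is, here’s the sort of thing anyone can paste into their browser’s developer console. The selector below is hypothetical rather than any real site’s markup, but the principle holds for any page:

```typescript
// Run in the browser's developer console on any web page.
// ".tweet-text" is a made-up selector for illustration; real sites'
// markup differs, but the technique is the same: find the element
// holding the text, and overwrite it.
const el = document.querySelector<HTMLElement>(".tweet-text");
if (el) {
  el.textContent = "Anything you'd like them to have said.";
}
// Take a screenshot and you have a 'statement' that was never made.
// The page reverts as soon as it's refreshed; the screenshot doesn't.
```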
This is not an abstract problem; it has been happening for some time. In 2019, shortly before the last general election in the UK, a terror attack took place near London Bridge. In the aftermath, many people shared a screenshot of a tweet supposedly from (then-leader of the Labour Party) Jeremy Corbyn, in which he appeared to claim the police had ‘murdered a man in broad daylight’ and to side with the attacker. Mr. Corbyn did not tweet this.
The fake tweet was swiftly debunked by many news sites, but this did not stop the screenshot being shared in WhatsApp groups, on Facebook, on Instagram, and elsewhere. Indeed, it is still up on several Facebook pages; although it is now covered by an interstitial stating it is ‘false information,’ you can still click through to see it. More worryingly, the comments on these Facebook pages (with titles such as ‘we love England’) include things such as these:
“It’s fake news but he has said such similar things in the past and probably would have done again if it not an election, we are putting dangerous terrorists on the street when clearly still dangerous to the public due to smoke flakes…”

“It’s fake news, but it’s believable…”
This is a problem that has recurred throughout the world. And, as with many things, it is not something that can be solved by having people cryptographically sign every public statement they make, nor by having public statements watermarked or stored in a register. People believing lies is not a problem that can be solved by technology.
The only thing I can suggest here is that education needs to improve drastically, and for everyone, not just for children. By ‘education’, I mean teaching critical thinking, and teaching history. And by ‘history’, I don’t mean ‘memorising the dates of death of each of Henry VIII’s wives’; I mean ‘learning to read and analyse different sources, and determining where the truth might lie.’
There is probably some degree of education the tech industry can help with here. Consider, for instance, how many people don’t realise that it’s trivial to modify the content of web pages using developer tools. But at the same time, I’m not sure I trust ‘the tech industry’ in general (whatever that means) to participate in educating people about technology, given that most education technology seems to be glorified surveillance.