Have you ever seen Donald Trump singing "Take on me"? That video, which you may recently have come across in your social media feed, is far more than just a funny clip. It is a so-called deepfake, a showcase of cutting-edge technology. Deepfakes could prove to be a blessing for the meme culture of Generation Y, but also a serious curse for public information services. We talked to Nina Schick, author of "Deepfakes: The Coming Infocalypse" (the book is also available in German).
Ms. Schick, deepfakes have outgrown their phase as a playful gimmick and will increasingly undermine the trust placed in video-based journalism. Is it possible to fake text as well, and not "just" images?
Deepfakes are another name for 'synthetic media' – media that is either manipulated or created entirely by Artificial Intelligence. Any form of digital media can now be 'made by AI' – audio, images, video and, yes, even text. This is a completely novel breakthrough that has only become possible thanks to the AI revolution of the last five years or so. In text format, that could mean, for example, an AI algorithm trained on the lifetime works of Shakespeare or Goethe that then 'learns' to generate text in the same style – a new work of Goethe created entirely by AI. In practical application, deepfake text will increasingly be used to automate things like news articles. The automation of 'creative content' is essentially how we can understand synthetic media – whether that 'creativity' is used for good or bad purposes is up to the creator.
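The idea of an algorithm "learning" an author's style and then generating new text in it can be illustrated, in heavily simplified form, with a word-level Markov chain. Real deepfake text comes from large neural language models, not from anything this simple; the sketch below is only a toy under that caveat, and every name in it is my own invention, not part of any deepfake tool.

```python
import random
from collections import defaultdict

def build_model(text, order=2):
    # "Training": map each run of `order` consecutive words in the corpus
    # to the words that were observed to follow it.
    words = text.split()
    model = defaultdict(list)
    for i in range(len(words) - order):
        model[tuple(words[i:i + order])].append(words[i + order])
    return model

def generate(model, length=10, seed=0):
    # "Generation": walk the model from a random starting phrase,
    # sampling one plausible next word at a time.
    rng = random.Random(seed)
    key = rng.choice(sorted(model.keys()))
    output = list(key)
    for _ in range(length):
        followers = model.get(tuple(output[-len(key):]))
        if not followers:  # dead end: the corpus never continues this phrase
            break
        output.append(rng.choice(followers))
    return " ".join(output)
```

Trained on a large enough corpus, even this toy produces text that superficially resembles its source; the neural models behind real synthetic text do the same thing at vastly greater scale and fluency.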
Deepfake technology – a revolution to come
Why will deepfakes be so important, which forces will use them and how will people be able to identify deepfakes?
Deepfakes and synthetic media are so important because they represent a revolution in the future of human commerce, perception and communication. While there is no doubt that deepfakes will be weaponized by bad actors, as AI makes it easier for anyone to produce sophisticated digital content, they will become the new norm for all digital content creation. According to some estimates, over 90 percent of video content online will be AI-generated by 2030. Aside from being used for disinformation, the technology will have many legitimate commercial applications too, transforming the future of every industry and of human communication. What we are talking about here, after all, is the automation of all content creation by AI in a way that makes it accessible to everyone via interfaces like smartphone apps. As the line between authentic and synthetic content becomes blurred, it will become increasingly difficult to tell where the digital realm begins or ends. Companies like NVIDIA and many other legitimate synthetic media actors are developing this technology because it is, of course, not only going to be used for 'bad' purposes.
How can it be that a technology with the ability to undermine democracy will be available to every smartphone user within a few years?
This technology is advancing quickly and is already being wrapped up in easy-to-use interfaces like smartphone apps. Deepfake apps like Reface.AI have already become the number one downloaded app in the app stores of the United States and 20 other countries around the world. They are available because they are not all used for nefarious purposes – with Reface, for instance, the content is made largely for entertainment, like memes. In the 18 months of its existence, users have already generated over three billion videos with the Reface app. These apps are tremendously popular, and they will continue to become more sophisticated and widespread. So far there are no ethics or standards governing the functionality of these apps, so while most of the deepfake content they produce is fairly benign at the moment, there is no doubt that as their functionality improves, they will be used to make more nefarious content too.
Is the validity of videos the only concern or will other forms of journalism and news suffer the same? Will there be a shift of trust to actual live coverage of events?
The fact that AI can make fake video that is just as convincing as authentic video is a huge leap, because it has traditionally not been possible to create convincing fake video without spending a lot of time and money. (Consider Hollywood special-effects budgets for blockbuster films, for instance.) That AI will democratize this for just about anyone has huge implications. Within the broader context of our corroding information ecosystem, where people no longer know what to trust, and with all the adverse consequences for journalists and the news, deepfakes can be seen as the latest evolving threat. We are already facing a massive crisis of trust as a consequence of our new digital information ecosystem – and the fact that now even video, which we have tended to see as an extension of our own perception, can be convincingly faked will only corrode trust in authentic content even further. If everything can be faked, then everything can also be denied. This gives bad actors much more room to evade accountability, or to dismiss authentic events and media as fake.
Deepfake victims and their protection
How does one protect oneself as a private person targeted by deepfakes, and what can one do to detect fake news? Facebook is already training an AI to recognize and identify deepfakes.
It is hard to protect oneself as a private person in this new information ecosystem. The alarming thing about malicious deepfakes is how they will be weaponized against normal people. Take deepfake pornography, for instance: the majority of it targets 'normal' women, including minors. The celebrities and politicians who are targeted at least have more resources to start litigation or to release their own version of events. That is not the case for private citizens who are targeted by deepfakes. The best step one can take is to stay aware of what is happening in our rapidly changing information ecosystem, so that we can navigate it critically without becoming cynical. Unfortunately, the reality is that if you become the target of a malicious deepfake, there is still little recourse, as there are, for example, no legal precedents yet. Deepfake detection technology is one way of fighting back: like anti-virus software, it offers a layer of protection. But just like anti-virus software, deepfake detectors can never catch every deepfake and will constantly need to be updated. Another part of the solution is the idea of 'media provenance': rather than trying to detect everything that is fake, we set up systems and standards to authenticate and show a chain of provenance for all authentic media.
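The "media provenance" idea Schick describes can be sketched as a hash chain: every editing step produces a record that commits both to the media bytes and to the previous record, so any later tampering breaks the chain. Real provenance standards (such as C2PA's Content Credentials) additionally use cryptographic signatures and richer metadata; the record format and function names below are purely my own illustration of the underlying principle.

```python
import hashlib
import json

def record_step(prev_hash, action, payload):
    # One provenance record: commits to the previous record's hash and to a
    # digest of the media bytes as they exist after this editing step.
    record = {
        "prev": prev_hash,
        "action": action,
        "payload_sha256": hashlib.sha256(payload).hexdigest(),
    }
    record_hash = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    return record, record_hash

def verify_chain(records, payloads):
    # Recompute every record from scratch; editing any record or any payload
    # changes a hash somewhere and the chain no longer matches.
    prev = "genesis"
    for record, payload in zip(records, payloads):
        expected_record, prev = record_step(prev, record.get("action"), payload)
        if expected_record != record:
            return False
    return True
```

The point of this design is that a consumer does not need to detect fakery at all: any media that arrives with an unbroken chain back to its capture device can be treated as authentic, and everything else is simply unverified.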
Can deepfakes be destroyed and the originals reconstructed?
Deepfakes cannot be destroyed as such, but there will be increasingly sophisticated AI technologies to detect synthetic content. The problem is that not all deepfakes or synthetic content will be malicious, so the context in which a deepfake appears will be very important in determining whether or not we want to detect it. A detector that flags billions of benign deepfake memes made on a smartphone app, for example, is not going to be very useful.
Do we as individuals have responsibilities regarding discrediting deepfakes?
Yes, we all have a responsibility to think about the ethics of synthetic media, because everyone will soon be able to create this content. That future is coming sooner than expected. We are all going to become producers of AI-generated content.