
What is a deepfake?

Deepfakes are images, sounds and texts created or manipulated by artificially intelligent software. The best-known deepfakes are videos in which one person's face is swapped with another's. In this article, you will read all about deepfakes: how they are made, examples of them, and the risks they pose.

 

Would you like to go in depth right away? Then download my report on Deepfake technology.

Image manipulation

Image manipulation is not reserved for our digital age; history offers many examples of manipulated images. For instance, an engraving was made of the American president Lincoln that showed his head on the body of John C. Calhoun, vice president of the US in the first half of the nineteenth century. Reportedly, this was done to make Lincoln's appearance more 'presidential'.

In 1990, Adobe released Photoshop. Since then, 'to photoshop' has become a verb. Nowadays, almost everyone owns a smartphone with a high-quality built-in camera, complete with photo filters. As a result, you are more likely to come across a manipulated photo online than the original.

 

Until recently, realistically edited videos were the preserve of Hollywood studios; now they are within everyone's reach. That is what makes deepfake such a prominent development, and why deepfake technology has been appearing in news reports more and more frequently in recent months. So what exactly are deepfakes?

 

What are deepfakes? 

So, deepfakes are texts, images, videos and audio clips created by artificially intelligent software. The term 'deepfake' combines the words 'deep' and 'fake', where 'deep' refers to artificially intelligent deep learning networks. Deepfake content can also cause a lot of hilarity: think of comedian Bill Hader doing an impersonation of Arnold Schwarzenegger, or Kim Kardashian talking in a video about how she manipulates her followers for commercial purposes.

 

“Deepfake refers to fake information that can be created by modern artificial intelligence (AI) software. These AI systems create new digital content such as faces, images, videos, texts, human voices and other audio recordings. New digital content that looks familiar to us, but which is in fact completely newborn. It’s fake.” Jarno Duursma

 

Deepfake technology also poses a threat: it can be used to manipulate opinions, blackmail people or damage reputations. We are entering an online era in which we can no longer trust our eyes and ears. I wrote an opinion article about this in the Dutch newspaper de Volkskrant, among other places. (And gave a Volkskrant interview in December 2020.)

 

Why do we hear so much about these deepfake videos? How is it that this development is happening so fast? Why is the number of deepfake videos increasing exponentially? The answer is relatively simple. All the signs are in place for this development to accelerate beyond measure. The videos are relatively easy to create (with smartphone apps, for example), easy to distribute (via social media and WhatsApp) and there is enough of an audience willing to share crazy, high-profile, juicy videos.


Fake information at a new level

Deepfake technology takes the concept of fake information to the next level. Information is abundantly available in digital form, and deepfake software is increasingly easy to use, now even via smartphone apps. Video, audio and text are all easy to manipulate. What was previously the exclusive domain of Hollywood is now within everyone's reach. We should therefore learn to recognize deepfakes quickly, especially now that we share (fake) information worldwide at lightning speed.

 

Generative Adversarial Networks

How are deepfakes created? Usually by making use of generative adversarial networks.

Over the past few decades, the quality of artificial intelligence has developed at a meteoric pace. Generative Adversarial Networks (GANs) are the technique behind most deepfakes. A GAN consists of two neural networks. The first, the generator, creates new digital content. The second, the discriminator, judges whether that content looks real. The two work in mutual competition, pushing each other to ever greater heights. The generator keeps producing content until the discriminator can no longer tell it apart from the real thing. The dividing line between real and fake thus becomes ever more blurred.
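The adversarial loop can be sketched in miniature. The toy below is an illustration of the generator-versus-discriminator dynamic, not a real neural network: the 'generator' is a single number theta (fakes are drawn around it), the 'discriminator' is a one-parameter logistic classifier, and both are updated with hand-derived gradients. All names and numbers here are invented for the sketch.

```python
import math
import random

random.seed(0)

def sigmoid(u):
    return 1.0 / (1.0 + math.exp(-u))

REAL_MEAN = 3.0   # "real" data comes from a Gaussian centered here
theta = 0.0       # generator parameter: fake samples are N(theta, 1)
a, b = 0.5, 0.0   # discriminator: D(x) = sigmoid(a*x + b), "how real is x?"
lr = 0.05         # learning rate for both players

for _ in range(3000):
    real = random.gauss(REAL_MEAN, 1.0)
    fake = theta + random.gauss(0.0, 1.0)

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    d_real = sigmoid(a * real + b)
    d_fake = sigmoid(a * fake + b)
    a += lr * ((1.0 - d_real) * real - d_fake * fake)
    b += lr * ((1.0 - d_real) - d_fake)

    # Generator step: nudge theta so the discriminator rates fakes as real.
    d_fake = sigmoid(a * fake + b)
    theta += lr * (1.0 - d_fake) * a

# The competition drives the generator's output toward the real data:
# theta drifts toward REAL_MEAN.
print(round(theta, 2))
```

Once the fake distribution matches the real one, the discriminator can do no better than guessing, which is exactly the "indistinguishable from the real thing" endpoint described above.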

Types of Deepfake

 

  1. Image

Of all forms of generative deepfake software, deepfake imagery gets the most attention. Generative AI software can, for example, swap faces in videos (face swap) or turn an ordinary photo into a fake nude photo (DeepNude). As the technology develops, it becomes increasingly difficult to distinguish fake from real, and serious abuse lurks around the corner.

 

  2. Voice

Images are not the only thing that can be deepfaked; sound, such as music and voices, can be too. Recently, financial fraud was committed using a cloned voice of a company director, which was used to successfully authorize a transaction worth approximately €220,000.

 

Voice cloning, the act of mimicking someone's voice, is becoming more and more credible.

In 2020, Google released software that reportedly needs only five seconds of audio to clone a voice. The startup Lovo has also put fine-grained voice cloning software on the market. Voice cloning is unstoppable.

I also created a voice clone of myself and linked it to my avatar video. Check out the video here. 

Applications of voice cloning: Spotify ads, for example, could be personalized on the fly by having the advertising voice use your name. Newsreaders could read the news 24 hours a day, as long as they are fed new texts. You could clone the voice of a loved one and talk to them through your smart speaker even after they have died. And publishing an audiobook alongside a regular book becomes far easier.

The bad scenario: once this works flawlessly, you can make anyone say whatever you want in an audio clip, things they would never approve of. Think of libel, reputational damage, blackmail and identity fraud.

An app on your smartphone, a few years from now?

 

  3. Text

Generative AI systems can also handle text. Digital text is abundantly available, from online newspaper archives to entire libraries full of e-books, and it is used to train these systems. A system does not need to understand a text in order to learn from it and generate new ones: deepfake texts, generated by software. As of 2021, most generated texts are not yet perfect, certainly not the longer ones, but the technology is improving all the time.

 

One example is GPT-3.

GPT-3 is an AI system trained to generate credible texts independently. In essence, the system does one thing very well: predict the next word in a given sentence. As a result, GPT-3 can produce complete texts that sometimes resemble human writing. It has no knowledge of the 'meaning' of words; it is simply very good at word prediction. It is, in effect, a super-sophisticated autocomplete, like the simple version you may recognize from typing on your phone. Although OpenAI has not released the model as open source, it has built an API for experimenting with the model's capabilities.
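The next-word-prediction idea can be illustrated with a deliberately tiny sketch. This is not how GPT-3 actually works (GPT-3 is a large neural network); it is a bigram model that predicts the next word purely from how often words follow each other in a small made-up corpus, which is enough to show the "autocomplete" principle.

```python
from collections import defaultdict

corpus = ("the system predicts the next word . "
          "the system generates the next sentence . "
          "the model predicts the next word .").split()

# Count which words follow which in the corpus (bigram statistics).
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def predict_next(word):
    """Return the most frequent follower of `word` in the corpus."""
    options = follows[word]
    return max(set(options), key=options.count)

def generate(word, n):
    """Greedily extend `word` by n predicted words (toy 'autocomplete')."""
    out = [word]
    for _ in range(n):
        out.append(predict_next(out[-1]))
    return " ".join(out)

print(predict_next("the"))   # "next" is the most frequent follower of "the"
print(generate("the", 4))    # prints "the next word . the"
```

The model has no idea what any word means; it only knows which word tends to come next. Scale the corpus up to libraries of text and the counting up to a deep network, and you have the intuition behind systems like GPT-3.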

 

In a Guardian newspaper article, the AI system wrote the following:

 

“Studies show that we cease to exist without human interaction. Surrounded by wifi we wander lost in fields of information unable to register the real world. As a new generation of cyberneticians keep watch, they see in our present age of the virtual a potential to transform the modern ‘cyborg’. Global cybernetics are already making it so.”

 

P.S. If you look up and read the Guardian article, please read my critique of this use of GPT-3 beforehand.

 

Risks of Deepfake

Fake information can be disruptive; think of blackmail and reputational damage in politics, business and legal practice. But the effects cut both ways. People may claim, "I didn't say that, that came from a computer." And some people will be inclined to dismiss any information as fake. If we can no longer trust information in general, society may become indifferent to it, which could ultimately threaten the democratic rule of law.

 

Suppose an audio recording emerged of the 2018 private meeting in the Finnish capital Helsinki between then U.S. President Trump and his Russian counterpart Putin. The meeting took place behind closed doors, without assistants or note-takers. Even if the recording contained compromising quotes that would make Trump vulnerable to blackmail, he could simply dismiss it as a deepfake. He would have nothing to worry about.

 

Some are already warning of an Infocalypse, a collapse of the reliability of information. Photos, videos, sound recordings, human voices, written texts, online reviews: everything we encounter could be fake. Read below about the problems that deepfakes can cause.

 

Problems caused by Deepfakes

Now that deepfake videos, texts and audio clips can be created and distributed relatively easily and quickly, what problems could arise? Below is a brief, non-exhaustive, list.

 

  1. Unrest and polarization

Suppose a deepfake video emerges of an important international politician appearing to take a bribe. Or a video in which an FBI employee claims that the FBI wants to arrest a member of the Biden family for alleged ties to Ukraine, and is fabricating the evidence itself. Or Russian fake videos showing a staged riot between British politicians, an anti-America demonstration in Saudi Arabia, or Israeli soldiers burning a Koran. One can think of countless examples in which staged videos or sound recordings are guaranteed to cause geopolitical unrest or social polarization.

 

Creating division in an enemy country could be an important strategy for a state actor. Polarization thwarts cooperation and unity, weakening the country as decision-making becomes less effective. After all, the foundation of a democratic state is a shared perception of reality and agreement on basic facts. Where that is lacking, national problems arise that are very difficult to solve afterwards.

 

Regimes can also use deepfake technology for their own propaganda, either to compromise political opponents or to portray their leaders favorably. Fake videos could show, for example, how these leaders ended up in difficult situations but emerged as true heroes. Internationally, Iran, China, North Korea, the US and Russia appear to be particularly active in developing this deepfake video technology.

 

  2. Blackmail

It is not inconceivable that politicians, journalists, military personnel, directors of large companies, whistleblowers and people with financial responsibilities will be blackmailed with deepfake videos in the future. One day, they may find the following email in their inbox:

 

“Hello, this is a link to an online video in which you play the lead role. You probably find it unpleasant when your sexual escapades are on display for the whole world to see. What would your family and friends think? It would probably harm your reputation. I advise you to comply with our [insert the criminals’ demands], otherwise et cetera.”

 

Even when the deepfake video is evidently of poor quality, the protagonist obviously would rather not have it distributed. The mere suggestion of unethical, criminal, deviant sexual behavior could result in considerable reputational damage and disgrace. It takes an enormous amount of time and energy to clear yourself of all blame; suggestions are persistent and may haunt you for the rest of your life. After all, outsiders will think: where there’s smoke, there’s fire. With deepfake technology, malicious people have a very powerful blackmail tool in their hands.

  3. Reputational damage

One of the most obvious effects of deepfake technology is the infliction of reputational damage. Activist environmental groups could use it to cast directors of biotechnology companies in a bad light. Commercial companies could use the technology to bring down a competitor. On the night before an IPO, a video of a chief financial officer may surface, ostensibly admitting that there is far less cash on the balance sheet than the official paperwork indicates. On the eve of an election, a video may pop up in which a politician utters sexist, racist or aggressive language. If there is any rehabilitation at all, it will be too late to repair any electoral damage.

 

People for whom reputation is paramount, in particular, will have to confront the risks of this technology in the short term. Indeed, early in the development of deepfake technology, a particularly vicious form of reputational damage already surfaced on the Reddit forum: deepfake porn, in which women's faces were pasted onto the bodies of porn actresses. Reddit intervened and removed this 'non-consensual pornography' section, but that does not mean the practice has disappeared. As such deepfake technology spreads, more and more women will become victims. There are already websites where female celebrities are placed in pornographic settings.

 

Another example involves Indian journalist Rana Ayyub. After campaigning for a rape victim, she appeared as the lead in a pornographic deepfake video. This was shared many tens of thousands of times and had a significant impact, both on her professional performance and on her as a person. The video was intended to inflict reputational damage on her because of her personal beliefs.

 

  4. Liars’ dividend and apathy

It is clear where the risks of deepfake technology lie. As the technology becomes easier to use, the general public will grow increasingly accustomed to manipulated images, texts and voices. On the one hand, that is a good thing: when you receive an audio recording of an acquaintance asking you to transfer money, you will realize the voice may be fake. But getting used to deepfakes has a downside. Amid the surge of fake videos, fake articles and fake audio recordings, we inadvertently hand malicious actors an asset: alleged perpetrators can simply dismiss evidence gathered by journalists, citizens or investigative agencies as a deepfake. This phenomenon, in which any incriminating information can be framed as synthetically created fake video or audio, is known as the 'liars' dividend'.

 

The second downside of a possible wave of fake videos, fake articles and fake audio recordings is the risk that we, as a society, become apathetic towards news. With so many possible lies in circulation, the general public may shrug at any video or audio recording that reveals wrongdoing, even when it is based on facts. Once apathy towards all content sets in, journalism loses its power to cause outcry by disclosing the faults of big business and government. Its most powerful weapon, exposure to the light of day, would lose its force. This could seriously damage democracy and the collective moral compass.

 

A new reality

Although deepfake technology has a great deal of potential, it is still in its infancy. That gives us some breathing space: time to get used to the idea that not all information is reliable, and that all types of content can be used to manipulate us. A race between real and fake has begun, and we will have to learn to live with it. Deepfake is a new reality.

 

Want to know more?

Do you want to know more about deepfakes, synthetic media and artificial intelligence? Then read the report ‘Deepfake Technology: The Infocalypse’ or book a lecture by Jarno Duursma on this topic.

 

Listen here to my interview on this topic with 3FM.

Listen here to my interview on this topic with NPO Radio 1.

Listen here to my interview on this topic with BNR.

Read my report ‘Deepfake technology: The Infocalypse’ here.

 

Listen here to my interview on this topic for Biohacking’s podcast.

My three publications on artificial intelligence.

(2017) The digital butler – (Management book Top 5 NL)

(2019) Machines with imagination: an artificial reality

(2019) Deepfake technology: The Infocalypse

 
