With the rising threat of AI-enabled fake content, any internet user is a potential victim of clever impersonation. Twenty-first-century fraudsters are equipped to craft synthetic identities from scratch, leaving any business that relies on documentary evidence vulnerable to fraud and extortion. Everyday people are targeted too, by fraudsters who manipulate video and audio content to extract personal data. In the era of social media, where almost all of us leave a digital footprint, it has become far easier for fraudsters to collect data and generate lifelike impersonations. Although experts are now developing machine learning methods to combat deepfakes, whether fighting AI with AI will bring deepfake crime to an end remains a mystery. Even if it does, it may come too late to restore trust in a society where it is no longer possible to believe what we see.

What is a Deepfake?

In 2017, an anonymous Reddit user posted a handful of videos that gave many online users their first exposure to 'deepfakes'. One of them showed the actress Scarlett Johansson's face superimposed onto a porn actor's body using neural network-powered artificial intelligence, making the digital alteration, the 'fake', look like real-life footage.[1] Deepfake technology is used to modify videos, to forge footage of people doing or saying things they have never done or said, and to create convincing synthetic audio. The 'deep' part of the name comes from deep learning, "a subset of machine learning where artificial neural networks, algorithms inspired by the human brain, learn from large amounts of data."[2] Although image and video synthesis has existed for far longer, it used to require tremendous effort by studios and experts. Today, machine-learning systems can produce deepfakes easily, and a growing range of tools allows such content to be created faster and at lower cost. The phone app Zao, for instance, lets users superimpose their faces onto TV and film characters listed on the system. The highest-quality deepfake content, which is much harder to spot, is nonetheless still produced on high-end desktops with powerful graphics cards.[3]
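Conceptually, the best-known face-swap deepfakes rest on a pair of autoencoders: a shared encoder learns a compact representation of facial expression and pose, a separate decoder is trained for each person, and swapping decoders at inference renders one person's expressions with the other's face. The sketch below is a minimal, hypothetical illustration of that idea in PyTorch; the layer sizes, the 64x64 image size, and the random tensors standing in for face crops are assumptions for illustration, not the workings of any particular deepfake tool.

```python
# Illustrative sketch of the classic face-swap deepfake architecture:
# one shared encoder, two person-specific decoders. Training reconstructs each
# person's faces with their own decoder; at inference the decoders are swapped.
# Shapes and hyperparameters are illustrative assumptions only.

import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),                # compact expression/pose code
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16x16 -> 32x32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32x32 -> 64x64
            nn.Sigmoid(),
        )

    def forward(self, z):
        h = self.fc(z).view(-1, 64, 16, 16)
        return self.net(h)

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()   # one decoder per person

# Training step (sketch): reconstruct each person's faces with their own decoder.
faces_a = torch.rand(8, 3, 64, 64)            # stand-in for cropped faces of person A
faces_b = torch.rand(8, 3, 64, 64)            # stand-in for cropped faces of person B
loss = nn.functional.mse_loss(decoder_a(encoder(faces_a)), faces_a) + \
       nn.functional.mse_loss(decoder_b(encoder(faces_b)), faces_b)

# The "swap": encode person A's expression, render it with person B's decoder.
fake_b = decoder_b(encoder(faces_a))
```

Because the network learns the mapping directly from example images, the quality of the forgery scales with the amount of footage available of the target, which is why public figures with hours of recorded video are the easiest subjects.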

The Rise of Deepfakes

AI-generated fake videos are becoming more widespread and more convincing. The cybersecurity company Deeptrace has warned of the rapid growth in deepfake content found online, with the figure rising 84 per cent within only seven months.[4] From the fabricated clip of Facebook CEO Mark Zuckerberg boasting of having 'total control of billions' stolen data' to the altered video of Barack Obama calling Donald Trump a 'dipshit', AI-enabled deepfakes continue to terrify and amuse online users. If the deep-learning algorithms used to make mind-bending alterations to a person's face suggest the worrying conclusion that we should not believe everything we see online, the possibility of deepfaking audio to create so-called 'voice clones' may erode our trust completely. Today's AI-equipped fraudsters combine machine learning technology with social engineering techniques to clone someone's voice and trick people into carrying out money transfers. All that is needed to produce a voice clone is access to audio recordings of the target, collected from earnings calls, interviews, speeches, and even WhatsApp voice recordings. It was recently reported that an employee of a UK energy company was tricked into transferring €200,000 to cybercriminals who used AI to mimic the boss's voice.[5] It was revealed that "the software was able to imitate the voice, and not only the voice: the tonality, the punctuation, the German accent,"[6] reflecting how convincing the highest-quality audio deepfakes can be. BlackBerry has also pointed to the implications of COVID-19 for deepfake fraud, arguing that less face-to-face interaction leaves people more vulnerable to clever impersonations that trick them into authorizing money transfers or sending data to seemingly real people.[7]

Deepfake and Synthetic Identity Fraud

Deepfake technology poses a major cybersecurity threat. A recent UCL study ranked fake audio and video content as the most serious AI crime threat,[8] underlining the worrying use of AI in fueling crime and terrorism. Given how difficult such content can be to detect and stop, how can one ever trust the audio or visual evidence found online? From discrediting public figures to manipulating decision-making processes through deceptive impersonation, deepfakes can cause damage across many areas of life. Deepfake technology makes any business that relies on documentary evidence susceptible to fraud and extortion, since fraudsters can craft synthetic identities by combining stolen information, such as national insurance and social security numbers, with a stolen photo and an email address.[9] Although today's credit agencies require further information, such as social media activity, before approving a profile, 21st-century fraudsters create fake social media accounts to bypass that check. With AI-generated fake videos and audio becoming mainstream, it is only a matter of time before deepfakes are used to create lifelike selfies, fueling synthetic identity fraud. Once the perfect synthetic identity exists, the dark side of artificial intelligence comes into play: a completely fake person could open a bank account, build up a credit score, and access money. Researchers have even shown that the Mona Lisa can be brought to life from a single image, with software engineers animating the painting so that it appears to move and talk.[10] In a world where even the Mona Lisa could open a bank account, it is not surprising that deepfakes may destroy trust in society. In a survey carried out for iProov, 85 per cent of consumers in the US and UK said "deepfakes would make it harder to trust what they see online, and nearly three-quarters said that made ID verification more important."[11] Although deepfake fraud predominantly targets financial institutions, the risk to individuals cannot be overlooked. Any one of us could fall victim to 'phishing' scams that extract personal data using manipulated audio and video. Is it really your sister on the phone asking for your personal information, or a clever impersonation using AI to personalize the request with a reference only your sister would know? The societal impact of deepfakes is enormous, and the most fundamental of human instincts, to believe what we see, may be gone for good.

Combating Deepfake Identity Fraud

The rising threat of AI-enabled fake content makes ID verification more important than ever in detecting and stopping deepfake fraud. Banks are now partnering with fintech groups to adopt biometric identification systems. Last month, HSBC became the latest bank to incorporate the biometric checks developed by Mitek and offered through a partnership with Adobe.[12] The biometric identification system, also adopted by Chase, ABN Amro, CaixaBank, Mastercard, and Anna Money, uses live images and electronic signatures to verify a customer's identity.[13] In addition, the UK fintech iProov has recently established a new security centre to oversee bank authentication systems and offers its services to financial institutions including Rabobank, ING, and Aegon.[14]

Moreover, while current deepfake detection techniques can identify tell-tale characteristics such as blurring or misalignment, boundary artifacts, and double eyebrows, deepfake technology is constantly evolving. Given its dynamic nature, experts suggest that deep learning methods are better suited to combat the threat than traditional software, which must be rewritten by hand to adapt to new deepfake techniques.[15] Developing an equally powerful AI tool therefore becomes crucial in the battle against deepfakes. Siwei Lyu of the University at Albany points out the effectiveness of deep learning methods: "they learn classification rules from training data, and can be adapted to complex conditions in which the videos are spread, for example, through video compression, social media laundering and other counter-measures applied by the forgers."[16] Nevertheless, training machine-learning models requires a large dataset, huge numbers of deepfake videos, on which to train the AI algorithms. To address this problem, the AI community, including Facebook, the Partnership on AI, Microsoft, and academics from leading universities, has established the Deepfake Detection Challenge, for which a unique new dataset has been commissioned to accelerate the development of methods to detect and block deepfakes.[17] A worked sketch of what such a learned detector looks like is given below. Whether the idea of 'pitting AI against AI' will manage to bring an end to deepfake crime, however, remains a mystery.
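To make the idea of "learning classification rules from training data" concrete, the following is a minimal, hypothetical sketch of a deep-learning deepfake detector: a small convolutional network trained as a binary real-versus-fake classifier on cropped face images. The architecture, the 128x128 input size, and the random tensors standing in for labelled training data are illustrative assumptions; real detectors are trained on large labelled corpora such as the Deepfake Detection Challenge dataset.

```python
# Illustrative sketch of a deep-learning deepfake detector: a small CNN trained
# as a binary classifier (real vs. fake) on cropped face images. Architecture,
# input size, and stand-in data are assumptions for illustration only.

import torch
import torch.nn as nn

class DeepfakeDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 128 -> 64
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 64 -> 32
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, 1)   # single logit: likelihood the face is fake

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = DeepfakeDetector()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.BCEWithLogitsLoss()

# Stand-in batch: 16 face crops of 128x128 pixels with real/fake labels.
faces = torch.rand(16, 3, 128, 128)
labels = torch.randint(0, 2, (16, 1)).float()   # 0 = real, 1 = fake

# One training step: the network learns its classification rules from labelled
# examples, which is why it can simply be retrained as new forgery techniques
# appear, rather than being rewritten by hand.
logits = model(faces)
loss = loss_fn(logits, labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

The design choice this illustrates is exactly the one Lyu describes: because the decision rules are learned rather than hand-coded, the same pipeline can be re-trained on compressed, re-uploaded, or otherwise "laundered" footage as forgers adapt.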

Even if the efforts of these tech giants pay off and their AI models overpower the constantly evolving deepfake technology, the battle raises critical moral questions. In a world where each of us is a potential victim of AI-enabled impersonation, restoring trust in society may prove impossible when seeing is no longer believing.

Author: Gokcen Deniz Akduman

Editor: Bryher Rose


[1] Kietzmann, J. and others, 2019. Deepfakes: Trick or Treat? [online] ResearchGate. Available at: <https://www.researchgate.net/publication/338144721_Deepfakes_Trick_or_treat> [Accessed 15 October 2020].

[2] Marr, B., 2020. What Is Deep Learning AI? A Simple Guide With 8 Practical Examples. [online] Forbes. Available at: <https://www.forbes.com/sites/bernardmarr/2018/10/01/what-is-deep-learning-ai-a-simple-guide-with-8-practical-examples/#397a4a198d4b> [Accessed 22 October 2020].

[3] Sample, I., 2020. What Are Deepfakes – And How Can You Spot Them?. [online] the Guardian. Available at: <https://www.theguardian.com/technology/2020/jan/13/what-are-deepfakes-and-how-can-you-spot-them> [Accessed 22 October 2020].

[4] Metz, R., 2020. The Number Of Deepfake Videos Online Is Spiking. Most Are Porn. [online] CNN. Available at: <https://edition.cnn.com/2019/10/07/tech/deepfake-videos-increase/index.html> [Accessed 22 October 2020].

[5] Harwell, D., 2019. An Artificial-Intelligence First: Voice-Mimicking Software Reportedly Used In A Major Theft. [online] The Washington Post. Available at: <https://www.washingtonpost.com/technology/2019/09/04/an-artificial-intelligence-first-voice-mimicking-software-reportedly-used-major-theft/> [Accessed 22 October 2020].

[6] ibid

[7] Ft.com. 2020. Banks Work With Fintechs To Counter ‘Deepfake’ Fraud. [online] Available at: <https://www.ft.com/content/8a5fa5b2-6aac-41cf-aa52-5d0b90c41840> [Accessed 22 October 2020].

[8] UCL News. 2020. ‘Deepfakes’ Ranked As Most Serious AI Crime Threat. [online] Available at: <https://www.ucl.ac.uk/news/2020/aug/deepfakes-ranked-most-serious-ai-crime-threat#:~:text=Fake%20audio%20or%20video%20content,to%20a%20new%20UCL%20report.> [Accessed 22 October 2020].

[9] Hendrikse, R., 2020. Synthetic Identity Fraud: How Even The Mona Lisa Could Open A Bank Account. [online] Forbes. Available at: <https://www.forbes.com/sites/renehendrikse/2020/08/28/the-mona-lisa-could-open-a-bank-account/#3c0243d63f69> [Accessed 22 October 2020].

[10] Metro.co.uk. 2020. Deep Fake Tech Brings The Mona Lisa To Life. [online] Available at: <https://metro.co.uk/2019/06/03/deepfake-tech-brings-mona-lisa-life-9780172/> [Accessed 22 October 2020].

[11] Ft.com. 2020. Banks Work With Fintechs To Counter ‘Deepfake’ Fraud. [online] Available at: <https://www.ft.com/content/8a5fa5b2-6aac-41cf-aa52-5d0b90c41840> [Accessed 22 October 2020].

[12] ibid

[13] ibid

[14] ibid

[15] Thomas, E., 2020. In The Battle Against Deepfakes, AI Is Being Pitted Against AI. [online] WIRED UK. Available at: <https://www.wired.co.uk/article/deepfakes-ai> [Accessed 22 October 2020].

[16] ibid

[17] Ai.facebook.com. 2020. Deepfake Detection Challenge Dataset. [online] Available at: <https://ai.facebook.com/datasets/dfdc/> [Accessed 22 October 2020].


Further Reading:

https://www.forbes.com/sites/renehendrikse/2020/08/28/the-mona-lisa-could-open-a-bank-account/?sh=66ffada13f69

https://www.cnbc.com/2019/10/14/what-is-deepfake-and-how-it-might-be-dangerous.html

https://www.thetimes.co.uk/article/the-rise-of-deepfakes-what-are-they-and-how-can-we-know-whats-real-x03jp3rqr