How Deepfakes Work

Pubudu Priyanga Liyanage, Chirath De Alwis, Shameen Samarawickrema, Buddhi Nayani Perera

Deepfake is a technique that uses deep learning algorithms to create realistic video or audio content that appears to show someone saying or doing something they never actually did. Creating a deepfake usually involves two steps: first, training a deep learning model on large datasets of real footage, and second, using that model to generate new content [1].

The code used to create deepfakes varies depending on the specific techniques and models being used, but the process generally involves the following common steps:

Data collection: Since deepfake algorithms are neural networks (NNs), they need a huge amount of labeled data to be trained properly. This data is typically collected from sources such as YouTube or other video-sharing platforms [1].

Preprocessing: Before training the deep learning model, the collected data is preprocessed to extract important features and prepare it for training. This may involve tasks such as aligning faces, removing backgrounds, or adjusting lighting [1].
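As a toy sketch of this preprocessing step, the snippet below crops a detected face region, resizes it to a fixed training resolution, and normalizes the pixel values. The function name, the bounding-box input, and the nearest-neighbour resize are illustrative assumptions; real pipelines typically run a face detector first and use proper landmark-based alignment.

```python
import numpy as np

def preprocess_face(frame: np.ndarray, box: tuple, size: int = 64) -> np.ndarray:
    """Crop a detected face region, resize it, and normalize pixel values.

    `box` is an (x, y, w, h) bounding box assumed to come from a face detector.
    """
    x, y, w, h = box
    face = frame[y:y + h, x:x + w]
    # Nearest-neighbour resize to a fixed training resolution
    rows = np.arange(size) * face.shape[0] // size
    cols = np.arange(size) * face.shape[1] // size
    resized = face[rows][:, cols]
    # Scale 8-bit pixels to [0, 1] for training
    return resized.astype(np.float32) / 255.0
```

In practice the output tensors from many such frames form the training set for the model described in the next steps.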

Model training: The deep learning model is trained on the preprocessed data using a technique called supervised learning. The model is given pairs of real and synthetic images or audio samples and learns to generate new content that is indistinguishable from the real footage [1].

Post-processing: After the model has been trained, the generated content is fine-tuned to improve its realism. This may involve techniques such as smoothing edges, adjusting color balance, or adding noise [1].

Integration: The final step involves integrating the deepfake content into a video or audio clip. This may involve techniques such as face-swapping or lip-syncing to make the deepfake content appear as though it is part of the original footage [1].
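As a toy illustration of the integration step, the sketch below pastes a generated face crop back into a target frame using a feathered alpha mask, so the seam between synthetic and original pixels is softened. The function `blend_face`, the bounding box, and the feathering factor are all assumptions made for illustration, not part of any particular deepfake tool.

```python
import numpy as np

def blend_face(target: np.ndarray, fake_face: np.ndarray, box, feather: float = 0.2):
    """Paste a generated face into the target frame with a soft alpha mask."""
    x, y, w, h = box
    # Soft mask: 1.0 in the centre of the region, fading to 0.0 at the edges
    yy = np.minimum(np.arange(h), np.arange(h)[::-1]) / (h * feather)
    xx = np.minimum(np.arange(w), np.arange(w)[::-1]) / (w * feather)
    mask = np.clip(np.outer(yy, xx), 0.0, 1.0)[..., None]
    out = target.astype(np.float32)
    region = out[y:y + h, x:x + w]
    out[y:y + h, x:x + w] = mask * fake_face + (1 - mask) * region
    return out.astype(target.dtype)
```

Production tools use far more sophisticated blending (color transfer, Poisson blending, per-frame landmark tracking), but the idea of compositing a generated region into original footage is the same.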

Let’s see what actually happens inside a deepfake model.

The most effective neural network-based design for creating deepfakes makes use of Generative Adversarial Networks (GANs). A GAN is a generative machine learning architecture that combines pre-existing methodologies to produce novel outputs. By mimicking samples they have previously been exposed to, GANs can produce remarkable results, creating new objects (images, text, audio, etc.) [2].

Neural Networks (NNs), a subset of machine learning, are the building blocks of deep learning techniques. An artificial neural network (ANN) consists of node layers: an input layer, one or more hidden layers, and an output layer [3]. NNs were created as models for classification and prediction. They are powerful non-linear optimizers that can be trained to adjust their parameters (neuron weights) to fit the training data. As a result, a trained NN can predict and categorize similar kinds of data [4].
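The layer structure described above can be sketched as a minimal forward pass. The weights below are random stand-ins, and the ReLU activation is one common (assumed) choice for the hidden layer:

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(x, layers):
    """One forward pass through a stack of fully connected layers."""
    for W, b in layers[:-1]:
        x = np.maximum(0.0, x @ W + b)   # hidden layers with ReLU activation
    W, b = layers[-1]
    return x @ W + b                      # linear output layer

# Input layer of 4 features -> one hidden layer of 8 neurons -> 2 outputs
layers = [(rng.normal(size=(4, 8)), np.zeros(8)),
          (rng.normal(size=(8, 2)), np.zeros(2))]
out = forward(rng.normal(size=(1, 4)), layers)
print(out.shape)  # (1, 2)
```

Training such a network means adjusting the `W` and `b` arrays so the outputs match the training labels, which is exactly what happens inside both halves of a GAN.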

A GAN is always divided into two (deep) neural networks. The first, referred to as the discriminator, is trained to tell a collection of real data apart from pure noise. This "discriminator" part of the GAN is a standard network trained to classify data: the input is an example of the data we wish to produce, and the output is a yes/no flag [4].

The other network is the generator, which produces the kind of data the discriminator is trained to recognize. The generator creates its output from a random input, so initially its output is essentially random; through backpropagation of the discriminator's feedback on whether that output resembles the required data, it gradually learns to do better [4].

To achieve this, the generator's outputs are fed to the discriminator. Since the discriminator is trained to identify genuine items, if the generator can imitate a certain object convincingly enough to fool it, the GAN can create fake images of that object that even a skilled viewer would mistake for the real thing [4].
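The adversarial loop described above can be sketched end to end on toy data. The example below is a deliberately tiny GAN in plain NumPy: the "real" data is a 1-D Gaussian, the generator is a linear map of noise, and the discriminator is logistic regression, with the gradients written out by hand. Every modeling choice here is a simplifying assumption for illustration; real deepfake GANs use deep convolutional networks on images.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Real data the GAN should learn to imitate: a 1-D Gaussian around 3.0
def real_batch(n):
    return rng.normal(3.0, 0.5, n)

wg, bg = 1.0, 0.0   # generator: g(z) = wg*z + bg
wd, bd = 0.0, 0.0   # discriminator: d(x) = sigmoid(wd*x + bd)
lr, batch = 0.05, 64

for step in range(3000):
    # --- discriminator update: push d(real) -> 1 and d(fake) -> 0 ---
    xr = real_batch(batch)
    z = rng.normal(size=batch)
    xf = wg * z + bg
    dr, df = sigmoid(wd * xr + bd), sigmoid(wd * xf + bd)
    wd -= lr * np.mean(-(1 - dr) * xr + df * xf)
    bd -= lr * np.mean(-(1 - dr) + df)
    # --- generator update: push d(fake) -> 1 (non-saturating loss) ---
    z = rng.normal(size=batch)
    xf = wg * z + bg
    df = sigmoid(wd * xf + bd)
    gx = -(1 - df) * wd          # dL/dxf for the loss -log d(xf)
    wg -= lr * np.mean(gx * z)
    bg -= lr * np.mean(gx)

samples = wg * rng.normal(size=1000) + bg
print(round(float(np.mean(samples)), 2))  # the mean drifts toward 3.0
```

The same dynamic, scaled up to convolutional networks over face images, is what produces convincing deepfake frames.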

Deepfake Detection 

Let's talk about detecting deepfakes. Deepfake detection is the process of using deep learning algorithms and analytical skills to determine whether a given video or audio recording has been altered or synthesized. This is crucial, as deepfakes are increasingly exploited by bad actors to spread disinformation, political propaganda, and fake news.

Here are some methods that can be used to detect deepfake videos:

Human body movements: Deepfake detection algorithms and analysts examine the body movements in a video to see whether they are abnormal or inconsistent with human behavior. Telltale motions include head turns, eye blinking, and facial expressions [5].
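One concrete version of the blinking check uses the eye-aspect-ratio (EAR) signal that blink detectors commonly compute from facial landmarks: a subject whose eyes never close over a long clip is one weak deepfake signal. The sketch below assumes the per-frame EAR series has already been extracted, and the 0.2 threshold is an illustrative choice.

```python
import numpy as np

def blink_count(ear: np.ndarray, threshold: float = 0.2) -> int:
    """Count eye blinks in a per-frame eye-aspect-ratio (EAR) time series.

    A blink is a run of frames where the EAR drops below the threshold.
    """
    closed = ear < threshold
    # Count transitions from open (False) to closed (True)
    return int(np.sum(closed[1:] & ~closed[:-1]) + closed[0])

# A 10-second clip at 30 fps with no blinks at all is suspicious:
ear = np.full(300, 0.3)           # eyes open the whole time
print(blink_count(ear))           # 0 blinks in 10 s -> flag for review
```

A normal adult blinks roughly every few seconds, so an abnormally low count over a long enough clip is worth flagging, though only alongside other signals.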

Digital anomalies: Deepfake videos may contain anomalies such as uneven lighting, irregular shadows, blurring around face edges, or pixelation. Detection algorithms and analysts can spot these artifacts and use them to judge whether a video is likely to be a deepfake [5].
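A simple proxy for the face-edge blurring mentioned above is the variance of the Laplacian: a suspiciously low value in the face-boundary region can hint at the smoothing left behind by blending. Using this single statistic as a deepfake signal is an assumption for illustration; the 3x3 Laplacian below is implemented with shifted array slices.

```python
import numpy as np

def laplacian_variance(gray: np.ndarray) -> float:
    """Variance of the Laplacian of a grayscale region: low values suggest
    an unusually blurry area, such as a blended deepfake boundary."""
    # 3x3 Laplacian kernel applied via shifted copies of the image
    lap = (-4 * gray[1:-1, 1:-1]
           + gray[:-2, 1:-1] + gray[2:, 1:-1]
           + gray[1:-1, :-2] + gray[1:-1, 2:])
    return float(np.var(lap))
```

In practice one would compare this score for the face region against the rest of the frame, since a uniformly blurry video is not suspicious on its own.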

Sound anomalies: Deepfakes frequently contain artificial audio produced by a neural network. Detection algorithms or analysts can examine the audio track for anomalies or abnormalities that indicate synthetic generation [6].
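As one illustrative audio check, the sketch below measures the fraction of spectral energy above a cutoff frequency; some synthesis pipelines under-represent high frequencies, so an unusually low ratio can serve as one weak signal among many. The 16 kHz sample rate and 4 kHz cutoff are assumptions chosen for the example.

```python
import numpy as np

def high_band_ratio(audio: np.ndarray, sr: int = 16000, cutoff: int = 4000) -> float:
    """Fraction of spectral energy above `cutoff` Hz in an audio signal."""
    spectrum = np.abs(np.fft.rfft(audio)) ** 2
    freqs = np.fft.rfftfreq(len(audio), d=1.0 / sr)
    total = spectrum.sum()
    return float(spectrum[freqs >= cutoff].sum() / total) if total else 0.0
```

A real forensic pipeline would use many such spectral and temporal features together rather than any single threshold.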

ML model reverse engineering: Just as deepfakes are produced by machine learning (ML), ML can be used to detect them: by examining large datasets of both genuine and synthetic videos, deep learning models can be trained to recognize deepfakes. These models learn to spot deepfake-specific patterns in the data and use them to discriminate between authentic and fake footage [6].
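A minimal sketch of such a learned detector is a logistic-regression classifier trained by gradient descent on per-clip feature vectors. The two features and the cluster positions below are made-up stand-ins (think of a blur score and a blink rate); real systems learn features directly from frames with deep networks.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 2-D feature vectors for labelled real (y=0) and
# fake (y=1) clips -- synthetic stand-ins for illustration only.
X_real = rng.normal([0.6, 0.3], 0.1, (200, 2))
X_fake = rng.normal([0.2, 0.05], 0.1, (200, 2))
X = np.vstack([X_real, X_fake])
y = np.r_[np.zeros(200), np.ones(200)]

# Logistic-regression detector trained with full-batch gradient descent
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    grad = p - y
    w -= 0.1 * (X.T @ grad) / len(y)
    b -= 0.1 * grad.mean()

pred = (1.0 / (1.0 + np.exp(-(X @ w + b)))) > 0.5
print(round(float((pred == y).mean()), 2))  # accuracy on the training set
```

Note that this is effectively a small discriminator: detection and generation sit on the two sides of the same adversarial game, which is why detectors must keep evolving as generators improve.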

It's important to note that deepfake detection remains an active research area, and very sophisticated deepfakes crafted with evasion in mind can fool detection systems. Therefore, it is important to exercise caution when judging the authenticity of media content and seek out reliable sources.

It's worth noting that while deepfake technology can be used for creative or entertainment purposes, it also raises important ethical and social concerns around the potential for the misuse of synthetic media, particularly in the context of disinformation and propaganda.



References

[1] M. Somers, "Deepfakes, explained," MIT Management, 21 July 2020. [Online]. Available: [Accessed 15 February 2023].

[2] D. Sblendorio, "How To Build A Generative Adversarial Network (GAN) To Identify Deepfakes," ActiveState, 18 March 2021. [Online]. Available: [Accessed 15 February 2023].

[3] "What are neural networks?," IBM. [Online]. Available: [Accessed 16 February 2023].

[4] P. Caressa, "How to build a GAN in Python," Codemotion, 15 May 2020. [Online]. Available: [Accessed 17 February 2023].

[5] M. S. Rana et al., "Deepfake Detection: A Systematic Literature Review," IEEE Access, vol. 10, pp. 25494-25513, 2022.

[6] T. Nguyen et al., "Capsule-Forensics: Using Capsule Networks to Detect Forged Images and Videos," IEEE Transactions on Information Forensics and Security, vol. 13, no. 8, pp. 2074-2089, August 2018.

About the Authors

Chirath De Alwis is an information security professional with more than 9 years’ experience in the Information Security domain. He is armed with MSc in IT (specialized in Cybersecurity) (distinction), PgDip in IT (specialized in Cybersecurity), BEng (Hons) Computer networks & Security (first class), AWS-SAA, SC-200, AZ-104, AZ-900, SC-300, SC-900, RCCE, C|EH, C|HFI and Qualys Certified Security Specialist certifications. Currently involved in vulnerability management, incident handling, cyber threat intelligence and digital forensics activities in Sri Lankan cyberspace.  Contact: [email protected]

Pubudu Priyanga Liyanage is a Cybersecurity undergraduate at the Sri Lanka Institute of Information Technology (SLIIT). He holds SC-900, NSE-01, NSE-02, and Qualys Certified Security certifications. He is currently involved in vulnerability management, digital forensics, penetration testing, application security, and SOC analyst activities in the cybersecurity field. Contact: [email protected]

Shameen Samarawickrema is a business analyst (with cybersecurity training) with over 3 years of experience, armed with a bachelor's degree in Business Information Systems from Cardiff Metropolitan University, a dual-major HND/HD in Software Engineering & Computing, and SC-900, NSE-01, NSE-02, NSE-03, Cisco, PMI, and Qualys certifications. Currently working as a BA and presales engineer, involved in vulnerability management, threat intelligence, digital forensics, SOC analysis, and application security activities in Sri Lankan cyberspace. Contact: [email protected]

Buddhi Nayani Perera is an Information Technology undergraduate at the Sri Lanka Institute of Information Technology (SLIIT). She holds SC-900, NSE-01, NSE-02, and Qualys Certified Security certifications, has presented her research at the 2022 4th International Conference on Advancements in Computing (ICAC), and has published her research article in the IEEE Xplore digital library. She is currently involved in application security, threat intelligence, and digital forensics activities in the cybersecurity field. Contact: [email protected]

March 27, 2023
© HAKIN9 MEDIA SP. Z O.O. SP. K. 2013