What are GANs?
In recent years, deep learning has empowered the realm of computer vision. Synthetic media – such as manipulated digital images, especially human portrait images – has improved rapidly and now achieves photo-realistic results in most cases.
One of the leading tools is DeepFaceLab (DFL), an open-source deepfake system for face swapping. The software has been widely used to create convincing synthetic media, providing an easy-to-use pipeline that lets people produce results without needing a comprehensive understanding of the underlying learning framework and model implementation.
The main drivers behind deepfakes are GANs and autoencoders:
GANs (Generative Adversarial Networks) jointly train two competing neural networks, a generator and a discriminator: the generator creates synthetic images from random noise, while the discriminator is trained to distinguish real samples from the fakes the generator produces.
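To make the adversarial idea concrete, here is a minimal toy sketch in numpy (not DeepFaceLab's actual code): a one-dimensional linear "generator" learns to push its samples toward a target "real" distribution, while a logistic "discriminator" tries to tell the two apart. Finite-difference gradients keep the example dependency-free; real GANs use deep networks and backpropagation.

```python
import numpy as np

rng = np.random.default_rng(0)

def generator(z, theta):
    """Toy linear generator: maps noise z to 'fake' samples."""
    w, b = theta
    return w * z + b

def discriminator(x, phi):
    """Toy logistic discriminator: estimated probability that x is real."""
    w, b = phi
    return 1.0 / (1.0 + np.exp(-(w * x + b)))

def numeric_grad(f, p, eps=1e-4):
    """Central-difference gradient, standing in for backpropagation."""
    g = np.zeros_like(p)
    for i in range(p.size):
        step = np.zeros_like(p)
        step[i] = eps
        g[i] = (f(p + step) - f(p - step)) / (2 * eps)
    return g

def gan_step(theta, phi, lr=0.05, n=128):
    real = rng.normal(4.0, 1.0, n)   # samples from the 'real' data distribution
    z = rng.normal(0.0, 1.0, n)      # random noise fed to the generator
    fake = generator(z, theta)

    # Discriminator update: score real samples near 1 and fakes near 0.
    d_loss = lambda p: -np.mean(np.log(discriminator(real, p) + 1e-8) +
                                np.log(1.0 - discriminator(fake, p) + 1e-8))
    phi = phi - lr * numeric_grad(d_loss, phi)

    # Generator update: make the discriminator score fakes near 1.
    g_loss = lambda t: -np.mean(np.log(discriminator(generator(z, t), phi) + 1e-8))
    theta = theta - lr * numeric_grad(g_loss, theta)
    return theta, phi

theta = np.array([0.5, 0.0])   # generator parameters (start far from the target)
phi = np.array([0.1, 0.0])     # discriminator parameters
for _ in range(2000):
    theta, phi = gan_step(theta, phi)
```

After training, the generator's samples drift toward the real distribution's mean even though it never sees the real data directly – it only sees the discriminator's feedback, which is the core of the adversarial setup.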
Another approach uses a special autoencoder called a VAE (Variational Autoencoder). Autoencoders consist of two components, an encoder and a decoder: the encoder learns to compress an input image into a low-dimensional representation, while the decoder reconstructs that representation back into the original image. The encoded representation captures patterns tied to important characteristics of the original image, such as facial expression.
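The encode-then-reconstruct idea can be sketched with a tiny linear autoencoder in numpy (an illustration only – a plain autoencoder, not a full VAE, and nothing like DFL's networks). The data here are made-up 8-pixel "images" that secretly vary along only 2 directions, so a 2-unit bottleneck can learn to reconstruct them almost perfectly:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: 100 'images' of 8 pixels that really vary along 2 hidden directions.
basis = rng.normal(size=(2, 8))
codes = rng.normal(size=(100, 2))
X = codes @ basis

# Linear encoder/decoder; the 2-unit bottleneck forces a compressed code.
W_enc = rng.normal(scale=0.1, size=(8, 2))
W_dec = rng.normal(scale=0.1, size=(2, 8))

lr = 0.05
for _ in range(2000):
    Z = X @ W_enc        # encode: 8-d input -> 2-d representation
    X_hat = Z @ W_dec    # decode: reconstruct the input from the code
    err = X_hat - X
    # Gradient descent on mean-squared reconstruction error.
    W_dec -= lr * Z.T @ err / len(X)
    W_enc -= lr * X.T @ (err @ W_dec.T) / len(X)
```

After training, the reconstruction error is a small fraction of the data's variance: the bottleneck code has learned the directions that matter, which is exactly the property face-swap pipelines exploit when they share one encoder between two faces.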
DeepFaceLab is easily accessible and can be trained on Google Colab or locally, provided you have a good GPU and plenty of patience and time. Pre-trained models are also widely shared in various community groups.
The process usually consists of 3 stages:
1. Extraction Phase – where we prepare and clean the data for our dataset (source, destination and model)
2. Training Phase – where several preview columns are shown and iterated on to shape the model toward our desired subjects, wherein the columns represent the following:
- Source face sample
- Generated Source face from that sample
- Destination face sample
- Generated destination face from that sample
- Generated source face modeling the destination face sample
Typically, fair results are achievable starting from around 100,000 iterations; the final quality still depends on how consistent our dataset is (factors include lighting, face profile, resolution, age, angle, etc.)
3. Conversion Phase – where we use our model to create the mask
- DeepFaceLab now has a GUI for this conversion, where we can blur or erode the mask to fit the generated face onto the subject's face; each frame can be controlled individually to re-align the mask.
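The erode and blur controls in that merge step boil down to shrinking a binary face mask and softening its edge before alpha-blending the generated face onto the destination frame. Here is a hedged pure-numpy sketch of the idea (my own simplified illustration, not DFL's implementation, which uses proper image-processing kernels):

```python
import numpy as np

def feather_mask(mask, blur=2, erode=1):
    """Shrink (erode) then soften (blur) a binary face mask, mimicking the
    erode/blur sliders in a deepfake merge step. Toy plus-shaped kernels."""
    m = mask.astype(float)
    # Erode: keep a pixel only if its whole 4-neighborhood is inside the mask.
    for _ in range(erode):
        p = np.pad(m, 1, mode="edge")
        m = np.minimum.reduce([p[:-2, 1:-1], p[2:, 1:-1],
                               p[1:-1, :-2], p[1:-1, 2:], p[1:-1, 1:-1]])
    # Blur: simple 5-point average to feather the mask edge.
    for _ in range(blur):
        p = np.pad(m, 1, mode="edge")
        m = (p[:-2, 1:-1] + p[2:, 1:-1] +
             p[1:-1, :-2] + p[1:-1, 2:] + p[1:-1, 1:-1]) / 5.0
    return m

def blend(dst, face, mask):
    """Alpha-blend the generated face onto the destination frame."""
    a = mask[..., None] if face.ndim == 3 else mask
    return a * face + (1.0 - a) * dst
```

Eroding first pulls the seam inside the generated face region, and blurring turns the hard 0/1 edge into a gradient, so the blend fades smoothly into the destination frame instead of showing a visible outline.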
DeepFaceLab's interface is simple but comprises many scripts wrapped in batch files with preset commands, which means each process flow involves many actions. Because DeepFaceLab takes time, patience, and a lot of data to achieve good results, a few apps have emerged in which fair results can be achieved in seconds from nothing more than a single image.
Facemagic, available on Android and iOS, makes this process quick and easy by letting users pick preloaded videos or upload custom videos using credits. Compared to other applications, Facemagic allows videos of up to 15 seconds, and will soon offer a 30-second option for twice the credits, letting users string multiple clips together easily with video-editing software such as Adobe Premiere Pro.
Image and video uploads are also available, letting users try on different haircuts and fashion trends, morph almost seamlessly into scenes from iconic movies and TV shows with a single photo, and even reanimate loved ones in old photos.
Facemagic is very intuitive to use and lets users upload videos and store them in the cloud. Compared to other applications, Facemagic supports direct video uploads and lets users cut video segments with a time-window trimming tool at the bottom of the screen.
After uploading a video, users can choose from several subjects detected in the clip and replace each one with a photo.
In the example shown, I used a clip from the South Korean TV series Start-Up (스타트업); an Asian male face is used as the subject to replace the face of the male actor in the clip.
Video with New Face:
Positive Use Cases of Deepfakes
While synthetic media has been controversial in recent years over security risks, the technology has also been an excellent enabler of new innovations and ideas. Many positive use cases have emerged thanks to advances in data science and artificial intelligence, opening possibilities and opportunities in education, art, accessibility, and business (such as Data Grid from Kyoto, a Japanese AI company that created an artificial-intelligence engine that automatically generates virtual models for advertising and fashion), through to public safety and the digital reconstruction of virtual crime scenes.
To this day, different applications continue to emerge from deepfakes, and the development of deep generative models raises new possibilities in sectors such as healthcare. One project used a GAN to create fake brain MRI scans, and researchers found that algorithms trained on these synthetic medical images plus a percentage of real images became just as good at spotting tumors as an algorithm trained only on real images.
Deepfakes can also improve scalability: with such applications, we can better share thoughts, films, art, and other creative works on a worldwide basis. Facemagic is a perfect application for improving diversity and content consumption on lower budgets. It also lets people live different lives within their one lifetime; everyone can be anyone, and with new-generation social networks, people can be both consumers and providers. Facemagic gives users the technology to feel themselves in the content at the push of a button, with no need for powerful hardware or expertise.
Suggestions and Download Links
We are always trying to improve our content and models – suggest the content you would like to see and the future updates you want. Talk to us at firstname.lastname@example.org
Credit: @Sean C.