Nvidia has developed a new deepfake-like algorithm that can swap your pet’s cute face onto different animal species, or turn it into an altogether monstrous version of itself.
The PetSwap project runs on the Few-shot UNsupervised Image-to-image Translation (FUNIT) algorithm, an image-to-image translation AI built on the generative adversarial network (GAN) framework.
GANs are essentially the same framework that inspired the deepfakes algorithm, and Nvidia has used them to create faces of people who don’t even exist. They are also used to train autonomous vehicles in simulated driving conditions.
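To make the GAN idea concrete, here is a minimal toy sketch (my own illustration, not Nvidia’s code): a generator learns to mimic a target data distribution while a discriminator learns to tell real samples from generated ones, and the two are trained against each other. Real GANs use deep networks on images; this one-dimensional numpy version uses simple linear models just to show the adversarial loop.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    # Numerically stable logistic function.
    return 1.0 / (1.0 + np.exp(-np.clip(x, -60, 60)))

# "Real" data: samples from N(4, 1).
# Generator: g(z) = w*z + b applied to noise z ~ N(0, 1).
w, b = 1.0, 0.0      # generator parameters
d_w, d_b = 0.1, 0.0  # discriminator parameters (logistic regression on a scalar)
lr = 0.05

for step in range(2000):
    z = rng.normal(0, 1, 64)
    fake = w * z + b
    real = rng.normal(4, 1, 64)

    # Discriminator step: ascend log D(real) + log(1 - D(fake)).
    p_real = sigmoid(d_w * real + d_b)
    p_fake = sigmoid(d_w * fake + d_b)
    d_w += lr * np.mean((1 - p_real) * real - p_fake * fake)
    d_b += lr * np.mean((1 - p_real) - p_fake)

    # Generator step: ascend log D(fake) (the non-saturating GAN loss).
    p_fake = sigmoid(d_w * fake + d_b)
    grad_fake = (1 - p_fake) * d_w  # d log D(fake) / d fake
    w += lr * np.mean(grad_fake * z)
    b += lr * np.mean(grad_fake)

# The generator's mean (b) typically drifts toward the real mean of 4.
print(f"generator mean ~ {b:.2f}")
```

After a couple of thousand adversarial steps, the generator’s output distribution usually overlaps the “real” one, which is the same dynamic that lets image GANs produce convincing fakes.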
Humans have an extraordinary capability for “generalization”: we can form a mental picture of what a previously unseen animal would look like in a different pose, “especially when we have encountered (images of) similar but different animals in that pose before.”
For instance, we humans can easily imagine what a sitting dog looks like when it’s standing up based on our general knowledge of other animals and their postures. However, machine learning algorithms lack this ability.
To fill this imagination gap, the researchers designed FUNIT to infer what an unseen animal might look like from just a few example images.
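The few-shot idea can be sketched structurally (a hypothetical toy illustration, not Nvidia’s actual model): a content encoder extracts the pose and layout of the input pet photo, a class encoder averages a “class code” over the handful of example images of the target species, and a decoder combines the two to render the same pose in the new species’ appearance. Here the three networks are stood in for by fixed random linear maps, since the point is the structure, not the training.

```python
import numpy as np

rng = np.random.default_rng(1)
D = 8  # toy feature dimension (real models use image-sized tensors)

# Stand-in "networks": fixed random linear maps instead of trained CNNs.
E_content = rng.normal(size=(D, D))  # content encoder
E_class = rng.normal(size=(D, D))    # class encoder
G = rng.normal(size=(D, 2 * D))      # decoder over [content_code; class_code]

def translate(content_img, class_examples):
    """Render content_img's pose in the appearance of the example class."""
    content_code = E_content @ content_img
    # Few-shot step: average the class code over the K example images,
    # so even an unseen species yields a usable appearance code.
    class_code = np.mean([E_class @ x for x in class_examples], axis=0)
    return G @ np.concatenate([content_code, class_code])

pet_face = rng.normal(size=D)                         # the uploaded pet photo
new_species = [rng.normal(size=D) for _ in range(3)]  # a few target-species images
out = translate(pet_face, new_species)
print(out.shape)  # (8,)
```

The averaging step is what makes the approach “few-shot”: at test time the model never retrains, it just computes a class code from whatever handful of example images it is given.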
Here’s one of the samples I tried:
After drawing a rectangle around the kitten’s face, just hit “translate.”
The PetSwap algorithm then gave me several mutant animals, morphing the cat’s face onto new bodies. I also tried uploading my own picture just to see what would happen, but the algorithm isn’t really meant for human faces.
Nevertheless, this project shows that machine learning models can generate new images from prior knowledge rather than from huge amounts of training data, which could save a lot of time.