Imagine a future where artificial intelligence (AI) helps you in amazing ways every day! From helping you write stories to generating realistic images and videos, AI is doing things we once thought only humans could do. It’s like magic! But there’s more: it can learn, adapt and get better at every task.
What if you could ask a computer to make a video of something that never happened? Or create smart assistants that can talk just like your grandparents? While this sounds fun, it can also be a bit scary when people use these tools to trick us.
Deepfakes are fake videos, images or even voices created by AI deep learning models that look and sound real. What would you do if a fake video of you were uploaded to discredit you?
Evolution of deepfakes
Fakery has come a long way, from manual editing to AI-generated synthetic voices, images and video. Initially, AI could only swap faces in photos or videos. Later it became smart enough to clone voices using text-to-speech and voice-conversion AI. Now, with language models (like those used in chatbots), AI can even conduct fake conversations! Some models can even swap faces during live video calls. If you get a video call from someone you trust, but it’s not really them talking, how can you tell what’s fake and what’s real?
Challenges in deepfake detection
As deepfake technology advances, the fakes become more realistic and harder to spot. A new type of AI technology called diffusion models (DMs) makes this even more challenging. Unlike older methods such as GANs (Generative Adversarial Networks), DMs create highly realistic photos and videos, making it harder to tell what’s fake. Researchers must now find new ways to detect deepfakes generated by these models, because they behave differently and leave different telltale traces than GANs.
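To see why detectors built for one family of models can miss the other, it helps to contrast how the two generate an image. Below is a minimal toy sketch in Python (with numpy stand-ins for real trained networks; `gan_generator` and `denoise_step` are hypothetical placeholders, not real library calls): a GAN synthesizes an image in a single forward pass, while a diffusion model refines pure noise over many small steps, and the two processes leave different statistical fingerprints.

```python
import numpy as np

rng = np.random.default_rng(0)

def gan_generator(z):
    # Toy stand-in for a trained GAN generator:
    # one forward pass maps noise straight to an "image".
    return np.tanh(z)

def denoise_step(x, t):
    # Toy stand-in for a trained diffusion denoiser:
    # removes a little noise at timestep t.
    return 0.9 * x

# GAN: the image appears in a single step.
z = rng.standard_normal((64, 64))
gan_image = gan_generator(z)

# Diffusion: the image emerges from many small refinement steps,
# starting from pure noise.
x = rng.standard_normal((64, 64))
for t in reversed(range(50)):  # e.g. 50 denoising steps
    x = denoise_step(x, t)
diffusion_image = x

# Because the generation processes differ, the artifacts they leave
# differ too, which is why GAN-era detectors often miss diffusion fakes.
```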
Another major challenge is that detecting deepfakes requires a lot of computing power. For example, analyzing a high-quality video with AI takes much longer than just watching it, making real-time detection very difficult.
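A rough back-of-envelope calculation shows why. All the numbers in this Python sketch are illustrative assumptions (frame rate, per-frame inference time), not benchmarks:

```python
# Back-of-envelope cost of frame-by-frame deepfake analysis.
fps = 30                          # assumed video frame rate
clip_seconds = 60                 # a one-minute clip
frames = fps * clip_seconds       # 1,800 frames to analyze
ms_per_frame = 100                # assumed model inference time per frame

analysis_seconds = frames * ms_per_frame / 1000
print(f"Playback time: {clip_seconds} s")
print(f"Analysis time: {analysis_seconds:.0f} s")  # 180 s, i.e. 3x playback
```

Under these assumptions, analyzing the clip takes three times as long as watching it, and the gap only widens at higher resolutions and frame rates.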
Moreover, it is difficult to balance detection methods against privacy concerns. Some people worry that aggressive deepfake detection could inadvertently violate privacy or falsely accuse innocent people of creating fake content, which has already happened in some legal cases.
According to The Guardian, one well-known case involved a woman in Pennsylvania, USA, who was accused of faking an incriminating video of teenage cheerleaders to harm her daughter’s rivals. She was arrested, publicly ostracized and charged for allegedly creating a malicious deepfake. However, upon further investigation, it emerged that the video had never been altered in the first place: the entire accusation was based on misinformation. The case highlights the risk of genuine content being wrongly labeled as fake. Conversely, some lawyers now claim that real videos are deepfakes in order to protect their clients.
The solution: detecting deepfakes
Researchers are working hard on ways to detect deepfakes! AI ‘detectives’ can now scrutinize videos frame by frame to spot the small errors that reveal a fake. They check for things like unnatural eye movements or inconsistent lighting: in short, physical characteristics that don’t add up. Some AI systems are even trained to check whether the audio matches the lip movements of the person in the video.
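A minimal sketch of that frame-by-frame workflow, assuming OpenCV for video decoding; the detector itself (`load_detector`, `score_frame`) and the decision thresholds are hypothetical placeholders standing in for a real trained classifier:

```python
import cv2  # OpenCV, used here only to decode video frames

def load_detector():
    # Hypothetical placeholder for a pretrained deepfake classifier,
    # e.g. a CNN scoring facial artifacts, lighting, eye movement.
    def score_frame(frame) -> float:
        return 0.0  # 0.0 = looks real, 1.0 = looks fake
    return score_frame

def scan_video(path: str, frame_threshold: float = 0.5) -> bool:
    """Scores every frame; flags the clip if enough frames look fake."""
    score_frame = load_detector()
    cap = cv2.VideoCapture(path)
    flagged = total = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        total += 1
        if score_frame(frame) > frame_threshold:
            flagged += 1
    cap.release()
    # Assumed decision rule: suspicious if >20% of frames look fake.
    return total > 0 and flagged / total > 0.2

if __name__ == "__main__":
    print("Deepfake suspected:", scan_video("clip.mp4"))
```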
There have already been cases where deepfakes were successfully exposed.
For example, according to The Times of India (TOI), in a recent case a man fell victim to a deepfake trap in which AI-generated explicit videos were created using his likeness. The perpetrators blackmailed him, threatening to leak the fake videos unless he paid up. The victim was so distraught that he nearly took his own life before reporting the crime. The case, one of the first of its kind in India, highlights the devastating personal impact deepfakes can have when used maliciously.
Similarly, some deepfake videos of celebrities and political leaders were exposed because AI detection tools spotted the fakes before most people noticed anything wrong. Some promising detection tools are:
- Sentinel: Focuses on analyzing facial images for signs of manipulation.
- Attestiv: Uses AI to analyze facial images and find fakes.
- Intel’s Real-Time Deepfake Detector (FakeCatcher): Detects deepfakes in videos in real time.
- WeVerify: This tool analyzes social media images for signs of manipulation.
- Microsoft’s Video Authenticator: Can check both images and videos for deepfakes.
- FakeBuster: A tool from the Indian Institute of Technology (IIT) Ropar, released in 2021, that verifies the authenticity of participants in video calls; it was trained on screen recordings of video conferences.
- Kroop AI’s VizMantiz: A multi-modal deepfake detection framework for the banking, finance and insurance industries and for social media platforms, developed by a Gujarat-based Indian startup.
How academic institutions and companies help
Many technology companies are stepping in to help detect deepfakes.
Big names like Facebook, Google and Microsoft are building tools that can scan videos on their platforms to find fakes before they spread. Microsoft’s Video Authenticator is one example. Google’s SynthID watermarks AI-generated content so it can be identified later. These companies are also working with researchers to make AI detect deepfakes faster.
Furthermore, the Massachusetts Institute of Technology (MIT) launched a fake-video detection website that looks for artifacts using facial analysis, audio-video synchronization and audio analysis.
The role of governments
Governments are stepping in to help protect people from the risks of deepfakes. In 2018, the Malicious Deep Fake Prohibition Act was introduced in the US Congress to punish those who use deepfakes to cause harm. Many other governments are also working on laws and policies to make it harder to create deceptive fake videos.
However, an important balance must be struck. Video generation and face-swapping technologies are tools; they can be used or abused. Rather than banning these developments, governments should focus on punishing those who misuse them for malicious purposes, while encouraging the development of useful applications of the technology. In addition, governments are considering regulations that would require companies to label AI-generated content clearly. This way, the audience can immediately know whether what they are seeing is real or artificial.
Governments also play a key role in public awareness. They can encourage people to stay cautious and ‘verify first’ before believing or sharing suspicious videos. By working closely with technology companies and research institutions, governments can ensure that deepfake detection tools are used safely, effectively and responsibly, preserving public trust and media integrity.
The way forward: a bright future for AI
Deepfakes are just a small problem in the vast ocean of AI challenges. As AI continues to evolve, new hurdles will arise, but also new opportunities. The application of AI is progressing so quickly that it could lead humanity to the next stage of evolution. By tackling issues like deepfakes head-on, we equip ourselves to tackle similar challenges that will undoubtedly arise in the future.
While deepfakes are a challenge, the future of AI looks incredibly bright: a world where AI helps people create amazing art and films, or even discover new solutions to complex problems. If we learn to manage the dangers posed by its misuse, such as deepfakes, AI will continue to enrich our lives in exciting and transformative ways.
Ultimately, as AI grows, we need to use it for good. If we do, AI will help us achieve a future full of possibilities we can’t even fathom.
(Rahul Prasad is co-founder and CTO of Bobble AI, an AI keyboard platform.)
(Disclaimer: The views and opinions expressed in this article are those of the author and do not necessarily reflect the views of YourStory.)