The rise of generative AI has made creating realistic fake images, videos, and audio much easier. Tools like DALL-E and Midjourney allow almost anyone to generate highly convincing deepfakes with minimal effort. This rapid development has led to a surge in AI-generated media, often leaving viewers questioning the authenticity of what they see online.
Deepfakes are becoming a serious problem, particularly when they are used to spread misinformation, manipulate public opinion, or commit fraud. Whether it’s fabricated images of public figures such as Donald Trump or Taylor Swift, or more targeted uses in political campaigns, these fakes can have real-world consequences.
Key Signs of a Deepfake
In the early days of AI deepfakes, the technology was easier to spot. Telltale errors, such as hands with six fingers or glasses with mismatched lenses, were common. As AI systems have improved, however, spotting deepfakes has become far more challenging.
A common giveaway is the unnatural smoothness of AI-generated images. Skin in deepfake photos often appears overly polished or airbrushed, lacking the texture or imperfections typical of real human skin. Similarly, lighting and shadows in these images may be inconsistent, especially when comparing the subject to the surrounding background.
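For readers who want a quick programmatic check, a rough proxy for this smoothness is how much fine detail an image contains. The sketch below assumes the OpenCV library and a hypothetical input file, and measures the variance of the Laplacian; the threshold is illustrative only, since real photographs vary widely.

```python
# Rough smoothness check: heavily AI-smoothed images tend to contain little
# high-frequency detail, which shows up as a low Laplacian variance.
# Assumes OpenCV (pip install opencv-python); "portrait.jpg" is a placeholder.
import cv2

def sharpness_score(image_path: str) -> float:
    """Variance of the Laplacian: lower values mean a smoother image."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        raise FileNotFoundError(image_path)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

score = sharpness_score("portrait.jpg")
print(f"Laplacian variance: {score:.1f}")
if score < 100:  # illustrative threshold, not a calibrated one
    print("Unusually smooth for a photograph - worth a closer look.")
```

A low score on its own proves nothing, since soft-focus portraits and compressed images also lose detail, but it can flag candidates for the closer manual inspection described above.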
Face-swapping, one of the most prevalent forms of deepfake manipulation, requires a closer look at facial details. For example, mismatches in skin tone between the face and neck, or unnatural blurring around the edges, can reveal a fake. In videos, check the synchronisation between lip movements and audio, as deepfakes may struggle with perfectly aligning speech.
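The face-to-neck tone comparison can also be approximated in code. The sketch below is a heuristic only: it assumes OpenCV’s bundled Haar cascade face detector, and the patch locations and the meaning of a "large" gap are assumptions rather than calibrated values.

```python
# Crude face-vs-neck colour comparison: a face swap can leave a tonal seam
# where the pasted face meets the original neck. Heuristic sketch only.
import cv2
import numpy as np

def tone_gap(image_path: str) -> float:
    img = cv2.imread(image_path)
    if img is None:
        raise FileNotFoundError(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        raise ValueError("no face detected")
    x, y, w, h = faces[0]
    face = img[y + h // 4 : y + 3 * h // 4, x : x + w]  # central face patch
    neck = img[y + h : y + h + h // 3, x : x + w]       # patch below the chin
    if neck.size == 0:
        raise ValueError("no room below the face to sample a neck patch")
    # Distance between mean BGR colours; a large gap can hint at a swap.
    return float(np.linalg.norm(face.mean(axis=(0, 1)) - neck.mean(axis=(0, 1))))

print(f"Mean-colour gap: {tone_gap('portrait.jpg'):.1f}")  # placeholder file
```

Lighting alone can produce a genuine tonal difference between face and neck, so a large gap is a prompt to zoom in on the blend line, not a verdict.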
Another subtle tell is the quality of teeth in images or videos. AI models sometimes fail to render individual teeth clearly, resulting in a blurry or merged appearance. This is especially evident when the subject is speaking or smiling in a video.
Verifying Deepfake Content
When faced with potentially misleading images or videos, context is crucial. Consider whether the situation is plausible: is the person depicted behaving in a manner consistent with their public character? For instance, the widely shared fake image of Pope Francis in a luxury puffer jacket depicted a scene that was suspiciously out of character.

As deepfakes become more difficult to spot, technology is also being developed to counter the threat. AI detection tools, such as Microsoft’s Video Authenticator and Intel’s FakeCatcher, analyse images and videos for signs of manipulation and give users a confidence score indicating whether the content is likely genuine. Intel’s FakeCatcher, for example, looks for the subtle colour changes that blood flow produces in a real human face. However, these tools are not yet widely available and have their limitations.
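Neither tool exposes a simple public API, but the general pattern, feeding media to a classifier and reading back a per-label confidence score, is easy to illustrate. The sketch below uses the Hugging Face transformers image-classification pipeline; both the model id and the file name are placeholders, and any real deployment would need a specific, vetted detector model.

```python
# Generic detector pattern: classify an image and report per-label confidence.
# Assumes the Hugging Face `transformers` library (pip install transformers).
from transformers import pipeline

def score_image(image_path: str, model_id: str) -> None:
    """Print each label with its confidence score for the given image."""
    detector = pipeline("image-classification", model=model_id)
    for result in detector(image_path):
        print(f"{result['label']}: {result['score']:.2%}")

# Both arguments are placeholders; supply a real, vetted detector model.
score_image("portrait.jpg", "some-org/deepfake-detector")
```

As the article notes, a confidence score from any such classifier is evidence to weigh alongside context, not a definitive ruling.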
The Future of Detecting Deepfakes
The challenge of identifying deepfakes will only increase as AI technology continues to advance. AI models are becoming more sophisticated, producing increasingly realistic content with fewer flaws. While detection tools may improve, relying solely on them could give people a false sense of security.
As AI-generated content becomes more widespread, it is important for both individuals and organisations to remain vigilant. Public education about the dangers of deepfakes and how to recognise them will be essential. However, as AI capabilities evolve, even the most experienced observers may struggle to distinguish between real and fake content.