Growing Danger

How to Check if Your Photo Is Being Used for Deepfakes

Your image in deepfakes—a chilling thought. You can find out if it's a reality. Photo: Getty Images

March 20, 2026, 10:28 am | Read time: 4 minutes

We often hear about celebrities who have fallen victim to deepfakes. However, the more recognizable the face, the quicker the misuse of manipulated images or videos comes to light. Paradoxically, this makes “ordinary” people even more appealing targets for fake erotic or other types of recordings. Sometimes a targeted attack—for revenge or fraud, for example—is behind it. TECHBOOK explains the best ways to find out whether your own image is being used for deepfakes.

Deepfakes – Fake Media Content With Potential Dangers

Generative artificial intelligence (AI) models are now so powerful that their deepfakes—the umbrella term for AI-created or manipulated material—often appear deceptively real. This opens up various opportunities but also increasingly poses dangers, which TECHBOOK has already covered extensively in another article. A negative example is the market for fake pornographic content, in which celebrities like Taylor Swift have involuntarily appeared.

But it’s not just celebrities; “ordinary” people are increasingly becoming the stars in deepfakes. The motives can vary: from revenge porn, where criminals aim to harm their (mostly female) victims, to compromising material used for bullying, to identity theft for fraudulent purposes.

According to the Federal Office for Information Security (BSI), one of the most important measures against deepfake attacks is to specifically educate people whose images could potentially be misused. The agency points to common tips for recognizing AI-generated video and image material. There are also special deepfake detection tools (such as Reality Defender, Hive Moderation, and Attestiv) that react to synthetic features and can thus indicate manipulation. However, they do not help you find out whether and where your own image might appear.


Finding Out if Your Own Image Is Used in Deepfakes

It must be acknowledged that there is currently no absolutely reliable method to definitively prove unauthorized use of your own image by AI. The most likely way to succeed is through a reverse image search. For this purpose, there are various tools, such as Google Lens or TinEye. PimEyes is considered particularly powerful—the service even shows (in the paid version) manipulated or edited versions of the found face. The programs generally work by having users upload the potentially misused photo or enter the URL of a website where the image is published. The services then search the web for similar images.
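The core idea behind such reverse image searches is that an image can be reduced to a compact fingerprint, and near-duplicate or lightly edited copies produce fingerprints that differ in only a few bits. The following stdlib-only Python sketch illustrates this with a simple average hash on toy grayscale pixel grids; it is a didactic simplification, not how Google Lens, TinEye, or PimEyes actually work internally.

```python
# Toy illustration of perceptual hashing, the idea underlying
# reverse image search: similar images yield similar fingerprints.

def average_hash(pixels):
    """1 bit per pixel: set if the pixel is brighter than the image mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming_distance(h1, h2):
    """Number of differing bits between two fingerprints."""
    return sum(a != b for a, b in zip(h1, h2))

# Toy 4x4 "images": the second is a slightly brightened copy of the
# first, the third is an unrelated image.
original = [[10, 200, 30, 220], [15, 210, 25, 230],
            [12, 205, 35, 225], [11, 215, 28, 235]]
edited = [[p + 5 for p in row] for row in original]  # mild global edit
other = [[240, 20, 250, 10], [235, 25, 245, 15],
         [230, 30, 240, 20], [225, 35, 235, 25]]

h_orig, h_edit, h_other = map(average_hash, (original, edited, other))
print(hamming_distance(h_orig, h_edit))   # small distance: likely a match
print(hamming_distance(h_orig, h_other))  # large distance: different image
```

Because the hash compares each pixel to the image's own average brightness, uniform edits such as brightening leave the fingerprint unchanged, which is why edited copies of a photo can still be found.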

The messaging app WhatsApp has also offered a quick way to perform reverse image searches for some time.

Preventing the Misuse of Your Image

What can be checked fairly well is whether your own image has ended up in publicly accessible AI training datasets. For an AI to generate convincingly real images or videos, it must first be trained with data from the web—that is, photos from publicly accessible websites or social networks. Photos you have published there can therefore also find their way into generative AI models.

The service Have I Been Trained can help users intervene in time. Users only need to upload their own photo, enter a text description of the image, or provide the URL where the image can be found. The tool then searches the image datasets often used for AI training. If it finds a match, you can object to the use of your own image.

If you actually find deepfake recordings of yourself on the internet, you should act quickly. TECHBOOK has spoken to a lawyer about the misuse of AI-generated content, who explains the rights of those affected. Depending on the case, you can take civil or even criminal action against the distribution of such content.

This article is a machine translation of the original German version of TECHBOOK and has been reviewed for accuracy and quality by a native speaker. For feedback, please contact us at info@techbook.de.
