The dark side of deepfake technology

Part 1: Potential Privacy Risks

Deepfake technology has grown tremendously in popularity in recent years. With the widespread use of deepfake content, problems such as public manipulation, violations of personal rights, intellectual property infringements, and personal data protection issues are becoming increasingly common. Lawmakers and Big Tech companies are looking for an effective response to the growing problems deepfakes cause. In this first part of the blog, we will explain what a deepfake is and the risks it poses under the GDPR.


What is a Deepfake?

Deepfakes are the audiovisual equivalent of fake news. Deepfake videos start from existing footage, which is then edited or manipulated by artificial intelligence (AI); as such, deepfakes are a form of synthetic media. Deepfakes go beyond ordinary fake news articles and require sophisticated techniques. Creating a deepfake is not something you do lightly: you need so-called “deep learning” software, software that can produce fake videos or fake audio clips via artificial intelligence.

When the input consists of images of a person, the software teaches itself how that person talks and moves. With the software and the knowledge it has accumulated, you can then create a fake video in which you have this person make any statement or perform any action.
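To make the mechanism concrete, here is a minimal numpy sketch of the shared-encoder/two-decoder idea behind many face-swap tools. Everything in it is illustrative: the “faces” are random arrays, and the encoder/decoder are untrained linear maps (real tools learn these weights with deep learning), so this is a shape-level illustration of the swap trick, not a working deepfake pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "face" data: flattened 8x8 grayscale frames for two people, A and B.
faces_a = rng.random((100, 64))
faces_b = rng.random((100, 64))

# One shared encoder and two person-specific decoders. Here they are just
# random linear maps; in practice these weights are learned from footage.
W_enc = rng.standard_normal((64, 16)) * 0.1    # shared encoder: 64 -> 16
W_dec_a = rng.standard_normal((16, 64)) * 0.1  # decoder for person A
W_dec_b = rng.standard_normal((16, 64)) * 0.1  # decoder for person B

def encode(frame):
    # Compress a frame to a compact code (pose, expression, lighting).
    return frame @ W_enc

def decode(code, w_dec):
    # Render the code back to a frame in one specific person's likeness.
    return code @ w_dec

# The swap trick: encode a frame of person A, but decode it with B's
# decoder, yielding A's pose/expression rendered with B's appearance.
frame_a = faces_a[0]
swapped = decode(encode(frame_a), W_dec_b)

print(swapped.shape)  # (64,) -- same size as the input frame
```

The point of the shared encoder is that both people's footage is compressed into the same code space; swapping decoders is then all it takes to transfer one person's expressions onto another's face.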

From Reddit Experiment to Trend – More Accessible than Ever!

The phenomenon originated in 2017 on the online platform Reddit. A software developer posted photos and videos on the platform in which he replaced the faces of porn actors with the faces of some Hollywood celebrities.

After these creations spread rapidly on Reddit and beyond, deepfakes became a real trend. In addition, the software has become simpler and more accessible. For example, creating a deepfake nowadays no longer requires huge data sets: a single photo of a source can be enough to create deepfake content.

Persuasive Examples and Dark Implications for Personal Data and Intellectual Property Rights

Some well-known examples of deepfakes involve Obama, Trump, Zuckerberg and Dalí. Moreover, they are quite convincing: it is difficult to tell which image is real and which was manipulated with deepfake technology.

However, deepfake technology also has a dark side. In recent years, for example, it has often been used for revenge pornography and even child pornography.

Not surprisingly, it also raises all sorts of problems on the legal front, such as violations of personal data protection rights and intellectual property rights. While there are many problems with deepfakes, we will focus primarily on intellectual property rights and personal data protection rights related to deepfake content.


Are Deepfakes Compatible with the GDPR?

If you want to create a deepfake of someone, you will have to feed the AI algorithm photos of that person.

A first question to ask here is whether we can use that person’s photo without their consent, and thus whether photos qualify as personal data as described in Article 4 of the GDPR.

It is important to first cite the right to one’s image. This right provides that permission is required for any human image and for its use.

In addition, the GDPR provides that images are personal data if a natural person can be identified from them. This means that taking photos or videos constitutes processing personal data as described in the GDPR.

To process personal data, an appropriate legal basis must be present in the first place.

The GDPR lists six legal grounds under which you may process personal data, namely:

  • consent;
  • the performance of a contract;
  • compliance with a legal obligation;
  • protection of an individual’s vital interests;
  • performance of a task in the public interest or in the exercise of official authority;
  • pursuit of a legitimate interest.

Whichever ground is relied on, the processing must remain within the limits of what is strictly necessary, in line with the data minimisation principle of Article 5(1)(c) GDPR.

In the context of deepfakes, only one of these legal grounds can realistically justify the use of a person’s images: the prior explicit consent of the data subject.

A nuance between targeted and non-targeted images

Does this mean you always have to ask for permission if you want to use someone else’s photo? No, a nuance needs to be made. For targeted images, you always need permission to take and use the photos. Targeted images are images in which the person or persons in question are deliberately captured and are clearly recognizable, for example passport photographs. Journalists do not have to seek permission for such targeted images if they are used in the context of news reporting. For non-targeted images, i.e. images that are not intended to clearly portray specific people and are rather atmospheric shots, you do not need explicit permission.

Challenges and Obligations for Users

In addition to obtaining consent, the GDPR imposes many other obligations that users of deepfakes must take into account. For example, appropriate security measures for the data used must be guaranteed, transparency must be provided to the persons whose data are processed, the period for which the data are kept must be limited, and the rights of data subjects must generally be respected: the right to withdraw consent, the right to access their data, the right to be forgotten, and so on.

All these things make creating deepfakes in Europe challenging. Under the GDPR, deepfakes are basically only allowed if the data subject has given prior consent.

Deepfakes’ Gray Zone as Personal Data under the GDPR

Yet the situation is not so black and white. Some argue that fake images or videos cannot be regarded as personal data because in some cases they can no longer be attributed to an individual. This argument overlooks the fact that, under the GDPR, personal data does not have to be objective. Subjective information such as opinions, judgments or estimates can also constitute personal data. That is why deepfakes can indeed contain personal data within the broad definition of personal data in the GDPR.

GDPR Exceptions and Moral Considerations

Another important point is that the GDPR only applies to information relating to an identifiable living person; information relating to a deceased person is not personal data and is therefore not covered by the GDPR.

This is a problem with deepfakes of deceased people: your rights under the GDPR expire at death. This means the GDPR offers no additional protection when deepfakes, even disrespectful ones, are made of a deceased person. One may question the morality of this.

Conclusion

From a GDPR perspective, many questions surrounding deepfakes remain unanswered; they fit within the broader context of the AI debate. In the second part of this blog we will discuss the issues surrounding copyright, portrait rights and consumer rights.
