Deepfakes: From Princess Leia to Tangible Values

The phenomenon known as the deepfake became popular through film and has since become a tool capable of affecting the economic stability of companies.

In 2016, the Star Wars film Rogue One surprised fans of the saga with a cameo by actress Carrie Fisher. It was not the Carrie Fisher of 2016, however: in the film she appears as she looked in 1977, an image that differs widely from her appearance 39 years after the first film was released.

The problem with the deepfake, however, lies not in its legitimate uses. It is no problem for us that Lola Flores stars in a Cruzcampo advertisement more than 25 years after her death, or that Princess Leia still looks the same almost forty years later. That is the magic of cinema: we know there is a trick, even if we cannot tell which one.

However, the deception is no longer a harmless radio prank by some latter-day Orson Welles: artificial intelligence has been refined to the point of producing images that convince both the human viewer and the artificial intelligences in charge of verification processes.

Moreover, the ideological polarization of the 21st century, driven by the one-sided discourses of social networks, has ultimately led to the erosion of critical thinking and the rise of unexamined appeals to authority.

The logical evolution of the deepfake phenomenon has given abstract values and ideas the faces and voices of the most watched people. The paradox could thus arise of seeing our favorite actor or actress delivering opposite messages to audiences of different ideologies; of Spanish politicians of one party or another encouraging social unrest or claiming to have taken part in a non-existent scandal; or even of the CEO of our own company claiming to collaborate with a terrorist organization. In short, the uncontrolled deepfake can become a political weapon.

But how can we detect whether the speaker in a video is not who he or she appears to be?

Ramon Arteman, president and director of Metropolitana de Muntatges, a company specialized in digital post-production and VFX for advertising, developed a guide for detecting deepfakes. Its main points are:

  • Evaluate the source of the information and identify mass resends.
  • Look for shadows and lighting inconsistent with the environment, and uneven skin tones on the body.
  • Check the audio quality.
  • Let strange messages make us suspicious of the reliability of the content.
  • Remember that deepfakes are usually short videos, given the time it takes to produce them.

According to the U.S. Government Accountability Office (GAO), the following points should also be considered for manual identification:

  • Inconsistent blinking.
  • Irregular ears.
  • Presence of features with low definition.
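The blink-consistency cue from the GAO list can also be approximated programmatically. As an illustrative sketch (not part of the GAO guidance), landmark-based pipelines commonly use the eye aspect ratio (EAR) of Soukupová and Čech as a proxy for eye openness: a clip whose blink rate falls far below the normal human resting rate suggests the inconsistent blinking GAO describes. The thresholds and the per-frame landmark input below are assumptions for demonstration, not established standards.

```python
import math

def eye_aspect_ratio(eye):
    """EAR from six (x, y) eye landmarks, ordered around the contour
    (Soukupova & Cech, 2016): vertical distances over horizontal width."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    vertical = dist(eye[1], eye[5]) + dist(eye[2], eye[4])
    horizontal = dist(eye[0], eye[3])
    return vertical / (2.0 * horizontal)

def count_blinks(ear_series, threshold=0.2, min_frames=2):
    """Count blinks: runs of at least `min_frames` consecutive frames
    whose EAR drops below `threshold` (closed-eye values are low)."""
    blinks, run = 0, 0
    for ear in ear_series:
        if ear < threshold:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    if run >= min_frames:
        blinks += 1
    return blinks

def blink_rate_suspicious(ear_series, fps=25, min_per_minute=5):
    """Flag a clip whose blink rate is well below the typical human
    resting rate (~15-20 blinks/min); the 5/min floor is an
    illustrative choice, not an established standard."""
    minutes = len(ear_series) / fps / 60.0
    if minutes == 0:
        return False
    return count_blinks(ear_series) / minutes < min_per_minute
```

In practice the per-frame EAR values would come from a facial-landmark detector run over the video; here the heuristic is kept self-contained so that only the blink-counting logic is shown.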

Similarly, some techniques for detecting fake news can be extrapolated to this new area. The following points, drawn from an article published by Telefónica and another by the BBC, also apply to the deepfake phenomenon:

  • Assess the emotional impact of the news.
  • Evaluate whether the news confirms a previous bias.
  • Search Google to check whether the story has already been disproved.
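The manual cues above lend themselves to a simple triage form. As a minimal sketch (the field names and the uniform weighting are illustrative choices, not part of either guide), a reviewer's answers to the combined Arteman, GAO, and fake-news checklists can be aggregated into a rough suspicion score:

```python
from dataclasses import dataclass, fields

@dataclass
class VideoChecklist:
    """Each field is a cue from the checklists above, answered by a
    human reviewer; True means the suspicious sign is present."""
    mass_resend: bool = False            # massively forwarded/resent
    lighting_mismatch: bool = False      # shadows/lighting off, uneven skin tone
    poor_audio: bool = False
    strange_message: bool = False
    very_short: bool = False
    inconsistent_blinking: bool = False
    irregular_ears: bool = False
    blurry_features: bool = False
    emotional_impact: bool = False       # designed to provoke a reaction
    confirms_bias: bool = False
    already_debunked: bool = False       # a search finds a debunking

def suspicion_score(checklist: VideoChecklist) -> float:
    """Fraction of cues present, in [0, 1]; equal weights are an
    illustrative simplification."""
    flags = [getattr(checklist, f.name) for f in fields(checklist)]
    return sum(flags) / len(flags)
```

A score near 0 suggests the video passed the manual checks; a high score means several independent cues point to manipulation and the clip deserves closer scrutiny before being shared.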

Vestigere offers Digital Surveillance (DS) services based on the early detection of smear campaigns and on brand-image monitoring, constituting a tool focused on reducing uncertainty and providing useful information for strategic decision-making.


References

  • Bañuelos, J. (2020). Deepfake: La imagen en tiempos de la posverdad. Retrieved from Revista Panamericana de Comunicación.
  • Canfranc, P. R. (n.d.). Deepfake, cuando lo que vemos ya no es de fiar. Retrieved from Fundación Telefónica.
  • Castillo, F. (2020). Blogthinkbig de Telefónica.
  • GAO. (2020). Science & Tech Spotlight: Deepfakes. Retrieved from GAO.
  • Gragnani, J. (2018). Guía básica para identificar noticias falsas (antes de mandarlas a tus grupos de WhatsApp). Retrieved from BBC.
  • Juste, M. (2021). Deepfake: ¿Es posible detectar un vídeo falso? Retrieved from Expansión.
  • Libby, K. (2021). El Deepfake de Lola Flores mola mucho… pero podría ser muy peligroso. Retrieved from Esquire.
  • Maksutov, A. A., Morozov, V. O., Lavrenov, A. A., & Smirnov, A. S. (2020). Methods of Deepfake Detection Based on Machine Learning. Retrieved from IEEE Xplore.
  • Mora, A. (2021). Lola Flores y el deepfake: ¿a favor o en contra? Retrieved from PCWorld.
  • Nguyen, T. T., Nguyen, C. M., Nguyen, D. T., Nguyen, D. T., & Nahavandi, S. (2020). Deep Learning for Deepfakes Creation and Detection: A Survey.
  • Paris, B., & Donovan, J. (2019). Deepfakes and Cheap Fakes: Manipulation of audio and visual evidence. Retrieved from Data & Society.
  • Robles, M. (2021). Deepfake: ¿amenaza para el orden mundial o futuro del entretenimiento? Retrieved from El Economista.
  • Rodriguez, A. G. (2021). Deepfakes, entre resucitar a Lola Flores y amenazar la democracia. Retrieved from El Orden Mundial.
  • Vega, G. (2021). Una campaña de Cruzcampo con un “deepfake” de Lola Flores se hace viral. Retrieved from El País.