Fake Media 2.0

How realistic-looking videos distort the truth.

Reading time: 6 minutes


What fake content does to us and how to identify it.

The term "Fake News" is well-known and refers to misinformation, often intended to deliberately deceive others.
However, videos have now taken this concept a step further by not only spreading falsehoods but also presenting distorted or false realities through their content.

In this article, you will learn what kind of content this is, what impact it has on us and our society, and how to spot it.

What’s the problem?

Social media is increasingly flooded with videos depicting dramatic events such as plane crashes, combat scenes, or serious accidents.
In some cases, real events are portrayed in ways that create a false impression — for example, film sets where accident scenes are staged and then shared with misleading titles to generate clicks.

Increasingly, however, these are highly realistic clips from video games or AI-generated images.
Technology is evolving rapidly. AI-generated content and high-resolution animations from video games are now so realistic that even trained eyes struggle to distinguish them from reality.

Examples include “fake hurricanes” that went viral on platforms like TikTok and Instagram during hurricane season; or plane crash clips sourced from simulations.

The result? Many viewers — often pressed for time or overwhelmed by the emotional nature of the content — fail to question what they see and share it further.

AI techniques such as deepfakes enable the creation of deceptively authentic images and videos that are difficult to distinguish from genuine content.
This makes it harder for viewers to differentiate between real and fake content, increasing the risk of disinformation spreading.

Many viewers accept such content uncritically as true, as the comments under such videos show. With advancing technology, fake videos are becoming ever more realistic.

A study by the University of Zurich shows that AI-generated fake news is harder to detect than misinformation created by humans.

The spread of such content has significant societal consequences. It distorts perceptions of reality and can undermine trust in media and institutions. Additionally, such misinformation can incite panic or manipulate public opinion.

Germany’s Federal Ministry of Education and Research emphasizes the importance of strengthening media literacy to counter fake news effectively (BMBF).

The Realism of Deception: Fake News in a New Dimension


Meticulously staged scenes from film sets, coupled with sensational titles, can likewise give the impression of real incidents.

Such content fuels misinformation, influencing public perception and sentiment.

Impacts of Fake News

The spread of misinformation and fake videos on social media has both short-term and long-term effects on society and viewers' mental health.

  • Short-Term Societal Impacts:
    Fake videos depicting dramatic events can trigger fear and increase overall insecurity among the population.

    For example, AI-generated videos of natural disasters or accidents can cause panic, unnecessarily occupying authorities and tying up genuine resources.

    A study by the Bertelsmann Foundation found that 84% of Germans view disinformation as a major societal problem (Bertelsmann Stiftung).

    Such content can also be strategically used to influence opinions—be it politically, economically, or socially. 
    Misinformation can polarize debates and pit groups against one another.

    When people perceive different realities based on manipulated content, social cohesion weakens.

    Experts view the spread of disinformation as a threat to societal unity (Vodafone Stiftung).

  • Long-Term Societal Impacts:
    Over time, such misinformation erodes trust in media and public institutions.
    People may become generally skeptical of news, making it harder for trustworthy sources to be heard.

    If fake videos become commonplace, they may be perceived as normal or harmless.
    This leads to a desensitized society less attuned to the dangers of spreading lies and manipulation, with potentially dangerous consequences. 

    Misinformation can result in individuals or even governments making decisions based on false assumptions, whether in elections, policy-making, or societal issues.

    In the long run, constant exposure to misinformation can degrade the quality of public debates and weaken trust in democratic processes.
    For example, AI-generated images of politicians or celebrities in compromising situations can lead to baseless accusations and reputational damage (Correctiv).

    A study by National Geographic explores whether digital media endangers democracy (National Geographic, Quarks).


Psychological Impacts on Viewers:

Constant exposure to negative imagery, such as disasters, violence, or accidents, can elevate stress levels.
Regular consumers of such content may feel more fearful and less secure in their environment.

Studies show that frequent consumption of negative content can lead to depression, anxiety, and a skewed worldview.
Those regularly confronted with dramatic, manipulative images may perceive the world as more threatening than it actually is.

A WHO study found that more than one in ten young people exhibits problematic behavior related to social media use (WHO).

A report by AOK highlights the impact of intensive social media use on mental health, including the potential development of depression (AOK).

How to Spot Fake Videos

To protect against false information, it’s important to critically examine videos. Here are some tips:

  1. AI-generated videos often show detail errors, such as distorted faces, anatomical inconsistencies (e.g., too many fingers), or unnatural lighting and shadows. (BR24)
    Frame transitions can reveal issues such as unnatural hair movement; distorted shapes in ears, eyes, or hair; deformations in glasses or earrings; or inconsistent colors and textures in buildings, clothing, and other objects.

  2. While PC games are increasingly detailed and graphically realistic, they remain identifiable on closer inspection.
    One example is airplane accidents or “shocking” plane videos. When assessing potentially AI-generated or artificially created content, the first step is to look for telltale signs.

    Compare real footage with video game footage: game graphics tend to show pixelated edges, hard shadows, and unrealistic reflections, whereas real footage shows soft shadows, realistic reflections, soft edges, and realistic buildings.

The same goes for combat scenes from shooter games that are presented as depicting current and real war events.
Watch, for example, this video from France24.

  3. Reading comments can also be helpful. While many viewers may treat the video as real, observant users often point out inconsistencies or identify the source material.

  4. Sensational titles (e.g., “No one expected this…” or “Doctors don’t want you to know this trick”) should always be approached critically — or, ideally, not clicked on at all.

  5. Tools like Google Reverse Image Search or InVID can help trace content back to its original source.
    Organizations like Fraunhofer AISEC are developing software to detect deepfakes and curb their spread.
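Reverse-image tools of the kind mentioned above typically work by computing compact fingerprints of images so that near-duplicates can be matched even after recompression or re-uploads. As a rough illustration of that idea, here is a minimal sketch of one classic fingerprint, the average hash (aHash), in pure Python. The 2D pixel lists standing in for images are an assumption for illustration only; real tools decode actual image files and use more robust hashes.

```python
# Minimal sketch of average hashing (aHash), one classic perceptual
# fingerprint behind reverse-image matching. Images are simplified to
# plain 2D lists of grayscale values (0-255); real tools decode actual
# image files first.

def average_hash(pixels, size=8):
    """Downscale to size x size by block averaging, then threshold at the mean."""
    h, w = len(pixels), len(pixels[0])
    bh, bw = h // size, w // size
    blocks = []
    for by in range(size):
        for bx in range(size):
            total = sum(pixels[y][x]
                        for y in range(by * bh, (by + 1) * bh)
                        for x in range(bx * bw, (bx + 1) * bw))
            blocks.append(total / (bh * bw))
    mean = sum(blocks) / len(blocks)
    return [1 if b >= mean else 0 for b in blocks]

def hamming(h1, h2):
    """Number of differing bits: a small distance means 'probably the same picture'."""
    return sum(a != b for a, b in zip(h1, h2))

# Synthetic 16x16 "image": dark left half, bright right half.
img = [[30 if x < 8 else 220 for x in range(16)] for y in range(16)]
# A slightly brightened copy (think re-uploads or filters) hashes identically...
similar = [[min(255, p + 10) for p in row] for row in img]
# ...while an inverted image differs in every bit.
different = [[255 - p for p in row] for row in img]

print(hamming(average_hash(img), average_hash(similar)))    # 0
print(hamming(average_hash(img), average_hash(different)))  # 64
```

In practice, a fact-checker's tool computes such a hash for a suspicious frame and compares it against hashes of known game footage or stock clips; a small Hamming distance flags a likely match.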

Critical Thinking Matters

In a world where visual content increasingly shapes our perception of reality, it is crucial to remain critical.
Don’t believe everything you see. Question it. Educate yourself. Share only what you personally consider reliable.

Media literacy plays a vital role in this. Awareness of the potential for manipulation and the ability to critically analyze content are essential to avoid becoming a victim of disinformation.

A Call to Creators

Content creators bear a special responsibility. The intentional spread of misleading videos can cause significant harm.

To all who create and share content: You have a responsibility! Before uploading a video, carefully verify:

  • Is the source clear? Especially when sharing third-party content, ensure its accuracy and reliability.
  • Could the title or context be misinterpreted? Even real events (e.g., footage from a film set) must be presented transparently to avoid misunderstandings.

One good example is this video on YouTube showing a burning plane. While most viewers will identify it as in-game footage, the headline pretends to show real emergency footage of an airplane.

Clicks and reach must never outweigh integrity. Every piece of misinformation — intentional or not — undermines trust and harms society as a whole.

Over to You

What do you think about this development? Do you find it easy to recognize AI-generated content? What do you recommend for identifying fake content?

Leave comments below to help our community and readers.

Share this article to raise awareness about responsible media consumption.
Together, we can break the power of misinformation and foster a positive, fact-based environment.

If you liked this article, leave a like.
