How Many Images From the Iran War Are Fake? The Hidden Cost of Checking the Digital World

Scroll through social media during the latest Iran war and the images come thick and fast. Missile strikes lighting the sky. Fighter jets streaking overhead. Cities apparently reduced to rubble. But a growing number of those images share something unexpected: they aren’t real.

Investigators and journalists examining the flood of online content have already identified large numbers of images and videos that were never taken on a battlefield at all. Some were generated entirely by artificial intelligence. Others turned out to be old footage from previous conflicts, recycled and reposted as if it were happening now. A surprising number of viral clips have even been traced back to video games and military simulators.

In previous wars the challenge was getting information out. Today the problem is something very different. We are drowning in information — and trying to work out which pieces of it actually belong to reality. Newsrooms, research groups and independent investigators now spend hours dissecting images that may have taken seconds to create. They analyse shadows and lighting, compare landscapes with satellite maps, and cross-reference buildings, roads and mountains with mapping software. Every pixel becomes a clue. And that takes time.
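One cheap first pass in that kind of verification work is perceptual hashing, the technique behind reverse image search: a recycled clip from an old conflict produces nearly the same fingerprint as the archived original, even after cropping and recompression. The sketch below is purely illustrative — real tools decode actual image files (for instance with Pillow or the ImageHash library), whereas this version computes a simple "average hash" over hand-made 8×8 grayscale grids so it runs with the standard library alone.

```python
# Illustrative sketch of average-hash fingerprinting for spotting
# recycled imagery. Assumes images are already downscaled to 8x8
# grayscale grids (lists of lists of 0-255 values); real pipelines
# do that resizing with an imaging library.

def average_hash(pixels):
    """Fingerprint an 8x8 grayscale grid as a 64-bit integer:
    each bit is 1 if that pixel is brighter than the grid's mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a, b):
    """Count differing bits; a small distance suggests the same image."""
    return bin(a ^ b).count("1")

# A toy "frame", a lightly recompressed copy, and an unrelated image.
frame = [[(r * 8 + c) * 4 for c in range(8)] for r in range(8)]
copy = [[min(255, v + 2) for v in row] for row in frame]
unrelated = [[255 - v for v in row] for row in frame]

print(hamming(average_hash(frame), average_hash(copy)))       # small: near-duplicate
print(hamming(average_hash(frame), average_hash(unrelated)))  # large: different image
```

A small Hamming distance flags a probable recycled image in milliseconds; it is the human judgment on the remaining hard cases — shadows, geolocation, satellite comparison — that consumes the hours.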

That raises a question which sits quietly behind the excitement surrounding artificial intelligence. AI is supposed to save time, automate work and boost productivity. It can write reports, generate images, summarise documents and produce articles in seconds. But what happens when a growing share of that time saving has to be spent checking whether the output is real in the first place?

In journalism the effect is already obvious. Entire teams now exist simply to verify images and videos circulating online. In education, teachers spend time trying to determine whether essays were written by students or by algorithms. In offices everywhere, employees quietly review AI-generated reports for mistakes, invented facts or references that simply do not exist.

Economists have begun calling this the “AI tax.” The technology dramatically accelerates the creation of information, but it also creates a new layer of work devoted to verifying it. The irony is simple. Artificial intelligence can generate content in seconds. Proving that content is genuine can take hours. Creating a fake image is cheap. Proving it is fake is expensive.

And as the flood of questionable images from modern conflicts shows, a growing share of the productivity promised by artificial intelligence may end up being spent on one very human task:

checking whether the machines are lying.
