Why Your Brain Can No Longer Trust What It Sees Online

Synthetic media is spreading faster than fact-checkers can respond. Here's why your ability to spot fake content is failing—and what you can do about it.

By Mick Smith · 8 min read

The Speed Game Synthetic Media Is Winning

One outlet with reported ties to Iran, operating under the name Explosive News, can reportedly produce a two-minute synthetic Lego-style video segment in roughly 24 hours. That turnaround time is not a coincidence—it is the entire strategy. Fabricated media does not need to withstand long-term scrutiny. It simply needs to circulate widely before anyone with the tools and training to challenge it gets the chance.

This dynamic played out in an unexpected way last month when the White House published two cryptic "launching soon" videos, only to pull them after independent researchers and open source investigators began picking them apart. The anticlimax was considerable: the clips turned out to be promotional material for the official White House mobile app. Yet the incident revealed something significant about how institutional communication has evolved. Official channels have adopted the visual language of leaks, platform intrigue, and viral mystery—blurring the line between authentic government messaging and something engineered to look deliberately obscure.

When even verified official accounts borrow the aesthetics of leaked footage, the only rational response is to question everything.

Real vs. Synthetic: The Signal Has Flipped

For years, a clean digital footprint was a reliable marker of authenticity. Content with no traceable history, no metadata trail, no prior appearances online was generally assumed to be original. That assumption no longer holds. The absence of a digital trail today may simply mean the content was never captured by a camera at all—it may have been generated entirely from scratch.

The scale of the problem is difficult to overstate. According to the 2026 State of AI Traffic and Cyberthreat Benchmark Report, automated traffic now accounts for an estimated 51 percent of all internet activity and is scaling at roughly eight times the rate of human-generated traffic. These automated systems do more than distribute content—they actively favor low-quality viral material, ensuring fabricated media reaches mass audiences well before any verification effort can catch up.

Open source investigators are doing what they can, but they are fighting a volume war they did not sign up for. The emergence of so-called "super sharers"—highly active accounts, often bolstered by paid verification badges—adds a veneer of false authority that traditional open source intelligence, or OSINT, must now work around.

"We're perpetually catching up to someone pressing repost without a second thought," says Maryam Ishani, an OSINT journalist covering active conflicts. "The algorithm prioritizes that reflex, and our information is always going to be one step behind."

When Verification Itself Becomes a Problem

The surge in war-monitoring accounts and aggregated conflict content on platforms like Telegram and X is creating a new and more insidious problem: it is beginning to distort reporting from within. Manisha Ganguly, visual forensics lead at The Guardian and an OSINT specialist who investigates war crimes, warns that the sheer flood of compiled content risks manufacturing false certainty rather than dismantling it.

"Open source verification starts to create false certainty when it stops being a method of inquiry—through confirmation bias, or when OSINT is used to cosmetically validate official accounts or knowingly misapplied to align with ideological narratives rather than interrogate them," Ganguly explains.

Compounding the problem, the tools that investigators rely on are becoming harder to access. On April 4, Planet Labs—one of the most heavily used commercial satellite imagery providers in conflict journalism—announced it would indefinitely suspend the release of imagery covering Iran and surrounding conflict zones, retroactive to March 9, following a direct request from the US government. US Defense Secretary Pete Hegseth was blunt in responding to concerns about this restriction: "Open source is not the place to determine what did or did not happen."

The consequences of that position are significant. When access to primary visual evidence is curtailed, independent verification becomes structurally harder. And in that narrowing space, generative AI does not simply fill a gap—it competes to define what version of reality people see in the first place.

AI Fakes Are Getting Harder to Catch

Generative AI platforms have been quietly correcting the flaws that once made synthetic images easy to identify. Henk van Ess, an investigative trainer and verification specialist, notes that the classic giveaways—wrong numbers of fingers, distorted protest signs, garbled text overlays—have been largely addressed in the most recent generation of image models. Tools such as Imagen 3, Midjourney, and DALL·E have made notable advances in photorealism, prompt interpretation, and the rendering of legible text within images.

But the more difficult challenge, van Ess argues, is what he describes as the hybrid.

In hybrid manipulation, 95 percent of an image is entirely genuine: real metadata, authentic sensor noise, accurate lighting physics. The fabricated element occupies a single detail—a patch stitched onto a uniform, a weapon placed into someone's hand, a face quietly swapped out. Pixel-level detection systems frequently miss these alterations because they are, in most technical respects, scanning a real photograph. The deception might occupy no more than one square inch of the frame.

"Every old method assumed the image was a record of something," van Ess says. "Generative media breaks that assumption at the root."

Henry Ajder, a deepfake researcher and AI adviser who has tracked synthetic media since 2018, takes the argument a step further. AI-generated content is no longer visibly artificial, he says—it is embedded seamlessly into the broader information environment. The volume of high-quality synthetic material now circulating online signals the end of the era when errors made fakes detectable. What follows is a landscape in which fabricated content looks entirely credible by default.

Detection tools, meanwhile, have their own considerable limitations. They are not truth machines. Even the most capable systems fail with enough frequency to matter, and most return only a confidence percentage without any explanation of the underlying reasoning. "Detection tools should never be used as a sole signal to determine action," Ajder cautions.

Five Steps Anyone Can Take to Verify Images

Until the infrastructure for large-scale content provenance exists, the responsibility for verification falls on individuals. Van Ess offers five practical steps—not foolproof guarantees, but meaningful ways to slow the spread of manipulated content.

1. Watch for the Hollywood Effect

If an image looks unnervingly cinematic—too dramatically lit, too perfectly composed, too symmetrical for a chaotic situation—treat that as a warning sign. Real crisis footage is rarely polished. If everyone in the frame looks ready for a close-up, something may not be right.

2. Run Multiple Reverse Image Searches

Google Lens, Yandex, and TinEye each index different sources and return different results. Using only one is insufficient. Critically, the absence of any match no longer confirms originality—it may mean the image has no photographic origin at all.
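
For anyone who runs this check often, the short Python sketch below fans a single image URL out to all three engines at once. The URL patterns are the engines' public web endpoints rather than documented APIs, so treat them as assumptions that may change; the example image URL is a placeholder.

```python
# A sketch that opens the same image URL in several reverse image search
# engines. The URL templates below are the engines' public web endpoints,
# not documented APIs, and may change without notice.
import webbrowser
from urllib.parse import quote

REVERSE_SEARCH_TEMPLATES = {
    "Google Lens": "https://lens.google.com/uploadbyurl?url={image_url}",
    "Yandex": "https://yandex.com/images/search?rpt=imageview&url={image_url}",
    "TinEye": "https://tineye.com/search?url={image_url}",
}


def open_reverse_searches(image_url: str) -> None:
    """Open one browser tab per engine. Each indexes different sources,
    so a miss on one engine (or on all of them) does not prove originality."""
    encoded = quote(image_url, safe="")
    for name, template in REVERSE_SEARCH_TEMPLATES.items():
        search_url = template.format(image_url=encoded)
        print(f"{name}: {search_url}")
        webbrowser.open_new_tab(search_url)


if __name__ == "__main__":
    # Placeholder URL; substitute the image you are checking.
    open_reverse_searches("https://example.com/viral-photo.jpg")
```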

3. Examine the Margins, Not the Focal Point

Skip the obvious landmark or central figure and look instead at the background details: street signs, manhole covers, shadow angles, peripheral objects. These are the areas where inconsistencies tend to appear, because they are the parts that anyone generating a fake is least likely to scrutinize.

4. Use Detection Tools as Starting Points, Not Conclusions

A confidence score without supporting explanation is not evidence. Tools that trace where an image first appeared online or cross-reference it against fact-checker databases are substantially more useful than a single percentage rating. ImageWhisperer is one freely available tool that combines several of these signals.
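
One way to keep a confidence score in its place is to fold it into a broader checklist. The Python sketch below is purely illustrative: the signals and thresholds are assumptions, not a published method, and the output is a suggested next step rather than a verdict.

```python
# An illustrative triage helper. The signals and thresholds are assumptions,
# not a published method; the output is a next step, never a verdict.
from dataclasses import dataclass
from typing import Optional


@dataclass
class VerificationSignals:
    detector_confidence: Optional[float]  # 0.0 to 1.0, or None if no tool was run
    earlier_appearance_found: bool        # a reverse search surfaced an earlier copy
    named_source: bool                    # a photographer, witness, or outlet is attached
    coherent_metadata: bool               # capture details exist and are plausible


def triage(signals: VerificationSignals) -> str:
    corroboration = sum([
        signals.earlier_appearance_found,
        signals.named_source,
        signals.coherent_metadata,
    ])
    score = signals.detector_confidence
    if score is not None and score >= 0.9 and corroboration == 0:
        return "High detector score, no corroboration: hold off on sharing and keep digging."
    if corroboration >= 2:
        return "Independent corroboration exists: a detector score alone should not override it."
    return "Inconclusive: trace the earliest appearance before acting on the score."


if __name__ == "__main__":
    print(triage(VerificationSignals(0.97, False, False, False)))
```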

5. Trace Back to Patient Zero

Follow an image to its earliest verifiable appearance online. Authentic material almost always arrives attached to a source—a named photographer, a witness, a specific location. Synthetic content tends to surface without friction: anonymous, already formatted for sharing, and curiously polished from the start.
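
The Internet Archive offers one partial shortcut here. Its public CDX API reports the earliest snapshot it holds of a given page, which the Python sketch below queries. The example URL is a placeholder, and an empty result proves nothing, since the Archive only holds what it happened to crawl.

```python
# A sketch that asks the Internet Archive's public CDX API for the earliest
# snapshot of a page. Coverage is limited to what the Archive crawled, so
# "no result" is not evidence of anything.
import json
from typing import Optional
from urllib.parse import urlencode
from urllib.request import urlopen

CDX_ENDPOINT = "https://web.archive.org/cdx/search/cdx"


def earliest_capture(page_url: str) -> Optional[str]:
    """Return the timestamp (YYYYMMDDhhmmss) of the oldest snapshot, if any."""
    query = urlencode({
        "url": page_url,
        "output": "json",
        "fl": "timestamp,original",
        "filter": "statuscode:200",
        "limit": "1",  # results are ordered oldest-first, so one row is enough
    })
    with urlopen(f"{CDX_ENDPOINT}?{query}", timeout=30) as response:
        rows = json.load(response)
    # Row 0 is the header (["timestamp", "original"]); data rows follow.
    return rows[1][0] if len(rows) > 1 else None


if __name__ == "__main__":
    # Placeholder URL; substitute the page where the image first surfaced.
    print(earliest_capture("example.com/viral-photo.jpg"))
```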

The Long-Term Answer Is Provenance, Not Just Detection

Ajder, who has advised major companies including Adobe and Synthesia, argues that chasing fake content after the fact is an unwinnable race. The more durable solution is provenance—building systems capable of verifying where content originated rather than perpetually trying to identify what has been falsified. That infrastructure does not yet exist at scale.
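
What would that infrastructure look like in miniature? The Python sketch below is a deliberately simplified stand-in: a publisher signs a small manifest describing a piece of content, and anyone downstream re-verifies both the signature and the content hash. Real provenance standards such as C2PA bind the manifest to a certificate chain embedded in the file; the shared-secret key here is an assumption made only to keep the example self-contained.

```python
# A deliberately simplified stand-in for content provenance: a publisher
# signs a manifest describing the content, and anyone downstream re-checks
# it. Real standards such as C2PA use certificate chains embedded in the
# file; the shared-secret key here exists only to keep the sketch runnable.
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key-not-a-real-certificate"  # assumption for this sketch


def issue_manifest(content: bytes, creator: str, captured_at: str) -> dict:
    """What a capture device or newsroom would attach at publication time."""
    manifest = {
        "sha256": hashlib.sha256(content).hexdigest(),
        "creator": creator,
        "captured_at": captured_at,
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest


def verify_manifest(content: bytes, manifest: dict) -> bool:
    """What a platform or reader would re-check before trusting the file."""
    claimed = {key: value for key, value in manifest.items() if key != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    signature_ok = hmac.compare_digest(expected, manifest["signature"])
    content_ok = claimed["sha256"] == hashlib.sha256(content).hexdigest()
    return signature_ok and content_ok


if __name__ == "__main__":
    photo = b"...raw image bytes..."  # placeholder content
    record = issue_manifest(photo, "Example Newsroom", "2026-03-09T10:00:00Z")
    print(verify_manifest(photo, record))         # True: content matches its record
    print(verify_manifest(photo + b"x", record))  # False: any alteration breaks the link
```

The shift in burden is the point: the question changes from "can anyone prove this is fake?" to "can the publisher prove where it came from?"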

Until it does, the burden does not disappear. It shifts to the people consuming content in real time. In an environment where synthetic media travels faster than any verification system can respond, the most meaningful defense available may be behavioral: a moment of hesitation before sharing. A few minutes of scrutiny in a system specifically designed to reward none.