How AI-Generated Disinformation Is Overwhelming X During the Iran Conflict

Fake AI images and videos about the Iran war are flooding X, with Grok spreading false information and millions viewing fabricated content before removal.

By Sophia Bennett

AI Disinformation Runs Rampant on X Amid Iran Conflict

Since the United States and Israel launched military operations against Iran on February 28, social media platform X has been overwhelmed by a torrent of disinformation — much of it powered by artificial intelligence. What began as a flood of misleading content has since evolved into a sophisticated, AI-fueled misinformation campaign that is making it increasingly difficult for users to distinguish fact from fiction.

Grok Gets It Wrong — Then Makes It Worse

X's own AI chatbot, Grok, has been at the center of the problem. The tool repeatedly misidentified the location and timestamp of a video originally posted by an Iranian state-owned media outlet, then compounded the error by generating an AI image in an apparent attempt to support its incorrect claims. Independent conflict analyst Hagin, who was verifying content on the platform, called the chatbot's behavior "AI slop of destruction" — a telling sign of just how far removed from reality the platform has drifted.

The Scale of Fake AI Content on X

The volume and sophistication of AI-generated disinformation have surged in recent days. Fabricated videos and images are being circulated by paid accounts holding blue check marks, as well as by Iranian officials seeking to exaggerate the scale of wartime destruction.

Some notable examples include:

  • AI-generated footage of a high-rise building in Bahrain engulfed in flames, shared by Iranian state media on March 2.
  • A fabricated image of a U.S. B-2 stealth bomber being shot down by Iran, with American troops shown as prisoners — viewed more than one million times before deletion.
  • Images depicting Delta Force soldiers allegedly captured by Iranian authorities, which racked up over five million views before being removed.
  • A cave-based missile manufacturing video, widely recognized as unrealistic but still viewed more than one million times across multiple accounts.

Antisemitic Narratives Pushed Through AI Content

Researchers at the Institute for Strategic Dialogue (ISD) have identified a coordinated pro-regime propaganda network on X that is leveraging AI-generated posts to spread overtly antisemitic content. These posts depict Orthodox Jewish individuals leading American soldiers into battle or celebrating U.S. casualties — a deliberate effort to weaponize AI imagery for ideological purposes.

Accounts within the same network also shared a fabricated video falsely showing young girls walking past President Donald Trump in a state of undress. The post garnered an estimated 6.8 million views before being taken down, though it continues to circulate on the platform.

A Turning Point for AI-Fueled Fake News

"What is particularly unique about this war is the dramatic uptick in AI-generated content I find myself debunking," Hagin told WIRED. "This is likely due to AI being advanced enough to fool journalists, and the ease with which users can create this AI slop with zero consequences. The longer we go without regulations against AI abuse, the more harm will be caused."

The accessibility of modern AI image and video generation tools has dramatically lowered the barrier for producing convincing fake content, creating a perfect storm of misinformation during an active military conflict.

X's Response Falls Short

In response to the surge of AI-generated conflict footage, X announced it would temporarily demonetize blue-check accounts that post unlabeled AI videos depicting armed conflict. However, the platform has not disclosed how many accounts have actually been penalized under this policy. Notably, several Iranian officials had been paying for X's premium subscription — granting them blue check marks, increased visibility, and monetization eligibility.

Traditional Disinformation Persists Alongside AI Fakes

Beyond AI-generated content, conventional disinformation has also continued to spread. A particularly troubling case involves the attack on a primary school in Minab, Iran, on February 28, which killed more than 168 people, 110 of them children. Pro-Trump accounts have repurposed unrelated conflict footage to falsely suggest the Iranian government struck its own school. In reality, footage verified by The New York Times shows a Tomahawk cruise missile — a weapon used exclusively by the United States in this conflict — hitting a naval base adjacent to the school.

Meta Also Under Fire for AI Labeling Failures

X is not alone in facing scrutiny. On Tuesday, Meta's Oversight Board criticized the company's approach to labeling AI-generated content, stating that its current framework is "neither robust nor comprehensive enough to handle the scale and speed of AI-generated misinformation, particularly during crises and conflicts." Meta acknowledged the board's findings in an online statement.

Detection Tools Are Struggling to Keep Up

"As AI-generated images and videos are increasingly sophisticated, users might not put into question visuals that are pushed as evidence to support pro-Iran claims when they look so real," said Isis Blachez, an analyst with media watchdog NewsGuard. "AI detection tools are not consistently successful at recognizing AI content," she added, highlighting a critical gap in the current infrastructure for combating digital disinformation.

As AI technology continues to advance faster than regulatory frameworks can adapt, experts warn that the line between verifiable reality and manufactured fiction will only grow thinner — with potentially dangerous consequences for public understanding of global conflicts.