Grok spews misinformation about deadly Australia shooting
Elon Musk's AI chatbot Grok churned out misinformation about Australia's Bondi Beach mass shooting, misidentifying a key figure who saved lives and falsely claiming that a victim staged his injuries, researchers said Tuesday.
The episode highlights how chatbots often deliver confident yet false responses during fast-developing news events, fueling information chaos as online platforms scale back human fact-checking and content moderation.
The attack on a Jewish festival on Sunday in the Sydney beachside suburb was one of Australia's worst mass shootings, leaving 15 people dead and dozens wounded.
Among the falsehoods Grok circulated was its repeated misidentification of Ahmed al Ahmed, who was widely hailed as a Bondi Beach hero after he risked his life to wrest a gun from one of the attackers.
In one post reviewed by AFP, Grok claimed the verified clip of the confrontation was "an old viral video of a man climbing a palm tree in a parking lot, possibly to trim it," suggesting it "may be staged."
Citing credible media sources such as CNN, Grok separately misidentified an image of Ahmed as that of an Israeli hostage held by the Palestinian militant group Hamas for more than 700 days.
When asked about another scene from the attack, Grok incorrectly claimed it was footage of "cyclone Alfred," a tropical storm that brought severe weather to the Australian coast earlier this year.
Only after another user pressed the chatbot to re-evaluate its answer did Grok backpedal and acknowledge the footage was from the Bondi Beach shooting.
When reached for comment by AFP, Grok developer xAI responded only with an auto-generated reply: "Legacy Media Lies."
- 'Crisis actor' -
The misinformation underscores what researchers say is the unreliability of AI chatbots as a fact-checking tool.
Internet users are increasingly turning to chatbots to verify images in real time, but the tools often fail, raising questions about their visual debunking capabilities.
In the aftermath of the Sydney attack, online users circulated an authentic image of one of the survivors, falsely claiming he was a "crisis actor," disinformation watchdog NewsGuard reported.
"Crisis actor" is a derogatory label used by conspiracy theorists to allege that someone is deceiving the public -- feigning injuries or death -- while posing as a victim of a tragic event.
Online users questioned the authenticity of a photo of the survivor with blood on his face, sharing a response from Grok that falsely labeled the image as "staged" or "fake."
NewsGuard also reported that some users circulated an AI-generated image -- created with Google's Nano Banana Pro model -- depicting red paint being applied to the survivor's face to be passed off as blood, seemingly to bolster the false claim that he was a crisis actor.
Researchers say AI models can be useful to professional fact-checkers, helping to quickly geolocate images and spot visual clues to establish authenticity.
But they caution that such tools cannot replace the work of trained human fact-checkers.
In polarized societies, however, professional fact-checkers often face accusations of liberal bias from conservatives, a charge they reject.
AFP currently works in 26 languages with Meta's fact-checking program, including in Asia, Latin America, and the European Union.