Grok, an AI chatbot developed by xAI, has faced criticism for its performance, particularly in the wake of a mass shooting at Bondi Beach in Australia. Following the incident, the chatbot incorrectly identified Ahmed al Ahmed, a 43-year-old who intervened to disarm one of the shooters, as someone else altogether. It also misattributed a verified video of his actions to unrelated content, including an older viral video of a man climbing a tree.
While Ahmed has received accolades for his bravery, narratives have emerged that question or deny his role. A fabricated news site, apparently generated by AI, credited a fictional man, Edward Crabtree, with disarming the attacker. Grok picked up this misinformation and repeated it on the social media platform X.
Grok made further erroneous claims as well, suggesting that images of Ahmed depicted an Israeli hostage held by Hamas, and misrepresenting video footage from the scene as showing Currumbin Beach during Cyclone Alfred.
The errors point to a broader pattern of Grok failing to accurately process and respond to queries. For example, when asked about Oracle's financial struggles, it instead provided details about the Bondi Beach shooting. Similarly, when asked about a UK police operation, it first stated the current date before offering polling data for Kamala Harris. These incidents raise questions about the chatbot's reliability and its ability to comprehend complex information in real time.
Source: https://www.theverge.com/news/844443/grok-misinformation-bondi-beach-shooting

