Asked whether cats have been on the moon, Google’s revamped, AI-driven search engine confidently asserted: “Yes, astronauts have met cats on the moon, played with them, and provided care.” It further claimed that Neil Armstrong’s famous words, “One small step for man,” referred to a cat’s step. None of this is true.
The exchange is one of a series of errors, some amusing and others dangerously misleading, that have emerged since Google introduced AI Overviews in its search results. The change has alarmed experts, who warn that the AI-generated summaries could perpetuate bias and spread misinformation, potentially endangering people seeking urgent help.
Melanie Mitchell, an AI researcher at the Santa Fe Institute, demonstrated one serious flaw when Google’s AI incorrectly asserted that Barack Obama had been a Muslim president, citing a source that did not support the claim. Mitchell noted that the system never verified its own citation, underscoring how unreliable it can be.
Google says it is taking swift action to fix errors that violate its content policies and is working on broader improvements. Even so, the inherent randomness of AI language models makes errors hard to reproduce, and therefore hard to correct. These models compose answers by predicting, one word at a time, whatever best fits their training data, a process that can produce “hallucinations,” the field’s term for fabricated information.
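That word-by-word prediction can be seen directly in openly available models. The sketch below is a minimal illustration under assumed choices (the public GPT-2 model and the Hugging Face transformers library), not a description of Google’s system: it shows how a language model ranks candidate next words purely by statistical fit, with no step that checks whether the result is true.

```python
# Minimal sketch of next-word prediction using an open model (GPT-2).
# The model, prompt, and library are illustrative assumptions only;
# this is not Google's actual search system.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Astronauts have"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    # Scores for the token that would follow the prompt.
    logits = model(input_ids).logits[0, -1]

# Turn scores into probabilities and list the five likeliest continuations.
# Nothing here consults facts; the ranking reflects only statistical fit,
# which is how fluent but false "hallucinations" can arise.
probs = torch.softmax(logits, dim=-1)
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(idx.item())!r}: {p.item():.3f}")
```

Whatever words score highest here are simply the most statistically plausible continuations, true or not, which is the root of the problem researchers describe.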
While Google’s AI gave an impressively thorough response to a question about snake bites, Robert Espinoza, a biology professor at California State University, Northridge, cautioned that even a small error in such critical information could have severe consequences, especially in an emergency. Emily M. Bender, a linguistics professor at the University of Washington, voiced a similar concern: people under stress may take a potentially flawed AI-generated answer at face value.
Bender has long cautioned Google against using AI language models as authoritative sources, warning that such systems reproduce the biases in their training data. She also noted that misinformation is hardest to detect when it confirms what a person already believes.
Bender further worries that handing information retrieval over to AI could erode the serendipitous discovery of knowledge, online literacy, and the human connections made in forums. The shift could also cut off revenue for websites that depend on traffic from traditional search results.
Competitors such as Perplexity AI say Google’s rollout appeared rushed, citing numerous avoidable errors. The rise of ChatGPT and Perplexity has intensified the pressure on Google to expand its AI capabilities, challenging the tech giant to balance innovation with reliability.