Google’s latest attempt to integrate artificial intelligence (AI) into its flagship search engine has hit a snag, with users reporting a series of bizarre and potentially dangerous responses from the new “AI Overviews” feature.
These AI-generated summaries, designed to provide direct answers to search queries, have been caught spewing out erroneous and even harmful information.
TLDR
- Google’s new AI search feature, called “AI Overviews,” has been providing erratic and inaccurate responses to some user queries.
- Examples include suggesting using non-toxic glue to make cheese stick to pizza, recommending eating rocks for digestive health, and reinforcing conspiracy theories about Barack Obama’s religion.
- These errors, often termed “hallucinations,” seem to stem from the AI misinterpreting satirical sources like Reddit comments and The Onion as factual information.
- Google has acknowledged these issues, calling them “isolated examples” and “generally uncommon queries,” but states that most AI Overviews provide high-quality information.
- The company is using these examples to refine its systems and has taken action where “policy violations” were identified.
One of the most alarming examples involved the AI suggesting that users could mix non-toxic glue with cheese to make it stick better to pizza.
This recommendation appears to have originated from a satirical Reddit post, but the AI presented it as a legitimate cooking tip. Ingesting glue, even if labeled as non-toxic, can be extremely hazardous to one’s health.
Another concerning response advised users to eat at least one small rock per day, citing fictitious “UC Berkeley geologists” as the source.
The AI claimed that rocks contain vitamins and minerals essential for digestive health, despite the obvious dangers of eating them.
Beyond potentially dangerous advice, the AI Overviews have also been caught reinforcing conspiracy theories and spreading misinformation. In one instance, the feature claimed that former President Barack Obama is Muslim, perpetuating a long-debunked conspiracy theory that persisted throughout his presidency even though he openly identified as a Christian.
Google has acknowledged these issues, describing them as “isolated examples” and “generally uncommon queries.”
The company insists that the “vast majority of AI Overviews provide high-quality information,” with links for users to explore further on the web.
However, the prevalence of these “hallucinations” – instances where the AI generates nonsensical or factually incorrect responses – has raised concerns about the reliability of the feature and the potential for it to spread misinformation on a massive scale.
The errors appear to stem from the AI’s inability to distinguish satire and fictional content from factual information. Many of the erroneous responses can be traced back to sources like the satirical news website The Onion or humorous Reddit comments, which the AI has mistakenly interpreted as legitimate sources.
While Google has stated that it is taking action to address policy violations and refine its systems, the widespread nature of these issues has called into question the readiness of the AI Overviews feature for a broad rollout.
As AI-powered search and information retrieval tools become more prevalent, ensuring the accuracy and trustworthiness of these systems will be crucial.
Incidents like these underscore the potential pitfalls of relying too heavily on AI without proper safeguards and fact-checking mechanisms in place.