Google's AI Overviews, a feature integrated into its search engine designed to provide summarized information, have faced scrutiny regarding the accuracy of health-related content. Reports and studies have highlighted instances where the AI summaries presented information described by experts as inaccurate, misleading, or lacking crucial context.
Google has acknowledged "oddities and errors" and stated its commitment to improving the quality of AI Overviews, particularly for health topics. The company is implementing actions under its policies and recently discontinued a separate, crowdsourced health advice feature called "What People Suggest."
Introduction to Google AI Overviews
Google launched AI Overviews in May 2024 for users in the United States, with plans to expand to more than 200 countries and 40 languages by July 2025. The feature places AI-generated summaries above traditional search results.
Reports indicate that these AI-generated summaries reach approximately 2 billion people monthly. Google's chief executive, Sundar Pichai, has said AI Overviews are "performing well," and the company aims for the feature to be "helpful" and "reliable."
Concerns Over Health Information Accuracy
Investigations and expert analyses have identified multiple instances of potentially inaccurate health information provided by AI Overviews:
- Pancreatic Cancer Advice: An AI summary advised individuals with pancreatic cancer to avoid high-fat foods. Medical experts, including Pancreatic Cancer UK, stated this contradicts standard medical advice and could compromise patients' ability to receive necessary treatments, potentially increasing mortality risk.
- Liver Function Tests: Information regarding liver function tests was described as inaccurate, with inconsistent or incorrect 'normal' ranges. The British Liver Trust noted this could lead individuals with serious liver conditions to misinterpret their health status and delay follow-up care.
- Women's Cancer Screening: Information about women's cancer tests was found to be incorrect; in particular, a Pap test was wrongly identified as a diagnostic tool for vaginal cancer. The Eve Appeal expressed concern that this error could deter individuals from seeking appropriate medical evaluation for symptoms.
- Mental Health Information: AI Overviews related to mental health conditions, including psychosis and eating disorders, presented information described as potentially misleading or lacking crucial context. Mind, a mental health charity, noted that advice offered could lead individuals to avoid professional help.
Several health organizations and professionals, including the Patient Information Forum, Marie Curie, and British Liver Trust, have voiced concerns regarding the potential for inaccurate health information to be prominently displayed in search results.
Source Reliability and the Role of YouTube
A study conducted by SE Ranking, analyzing more than 50,000 German-language health queries issued from Berlin, indicated that Google's AI Overviews cited YouTube more frequently than dedicated medical websites for health-related questions.
Study Findings
The study identified YouTube as the most cited source, accounting for 4.43% of all AI Overview citations. No hospital network, government health portal, medical association, or academic institution reached comparable citation numbers. Researchers expressed concern, highlighting that YouTube is a general video platform where content can be uploaded by individuals with or without medical training.
Expert Commentary
Hannah van Kolfschooten, an AI, health, and law researcher, suggested the heavy reliance on YouTube implies that "visibility and popularity, rather than medical reliability, is the central driver for health knowledge."
She characterized the findings as "empirical evidence that the risks posed by AI Overviews for health are structural, not anecdotal."
Google's Response
Google stated that AI Overviews are designed to surface high-quality content from reputable sources, regardless of format, and noted that credible health authorities and licensed medical professionals create content on YouTube. The company also suggested the study's findings, based on German-language queries, might not be generalizable to other regions. Google indicated that 96% of the 25 most cited YouTube videos in the study were from medical channels, though researchers noted these constitute less than 1% of all YouTube links cited for health.
Study Limitations
Researchers acknowledged the study was a one-time snapshot of German-language queries captured in December 2025, and that results could vary over time, by region, and with question phrasing.
Broader Implications and Expert Concerns
Experts have raised additional concerns about the nature of AI Overviews:
- Single Authoritative Answer: Unlike traditional search results, which present a range of sources, AI Overviews offer a single, AI-generated answer, potentially reducing users' opportunities for critical assessment.
- Reduced Critical Evaluation: Associate Professor Nicole Gross suggested users are less likely to research further once an AI summary appears.
- Evidence Distinction: Concerns exist about the AI's potential failure to distinguish between strong and weak evidence (e.g., randomized trials versus observational studies) and the omission of crucial caveats.
- Fluctuating Answers: AI Overviews' responses can fluctuate as the AI evolves, even if scientific understanding remains unchanged.
The most significant concern articulated is that erroneous or dangerous medical information from AI Overviews could shape patient behavior, potentially posing life-threatening risks.
Google's Responses and Actions
Google has provided several statements and taken actions in response to the identified issues:
- Initial Stance: Google initially stated that many of the health examples provided were "incomplete screenshots" and maintained that its AI Overviews often link "to well-known, reputable sources and recommend seeking out expert advice."
- Commitment to Quality: The company affirmed its significant investment in the quality of AI Overviews, particularly for health topics, and maintained that the majority provide accurate information. Google stated that the accuracy rate of AI Overviews is consistent with other search features, such as featured snippets.
- Removals and Improvements: Google acknowledged "oddities and errors" resulting from misinterpretation of web page language. The company subsequently removed some of the AI Overviews for health queries highlighted by investigations. Google has stated it takes action under its policies when AI Overviews misinterpret web content or miss context, and it commits to using errors to improve its systems.
- Dynamic Links: Google added that links within AI Overviews are dynamic and adapt based on relevance and timeliness.
Discontinuation of "What People Suggest" Feature
Amid the scrutiny surrounding AI Overviews, Google removed a separate AI search feature called "What People Suggest." Launched in March of the previous year, the feature offered crowdsourced health advice by organizing perspectives from online discussions into themes.
Google confirmed the removal as part of a "broader simplification" of its search page, clarifying that the decision was not related to the quality or safety of the feature itself. Google's then-chief health officer, Karen DeSalvo, had previously stated that while users seek expert medical information, they also value insights from others with comparable experiences.
When questioned about public communication regarding the removal, Google referenced a blog post from November of the previous year by a Google search advocate, which did not explicitly mention "What People Suggest."
Google is scheduled to hold its next "The Check Up" event, where Chief Health Officer Michael Howell and other staff are expected to discuss new AI research, technological innovations, and partnerships addressing global health challenges. The broader context includes ongoing discussions about the reliability of AI-generated data, with similar issues having been raised concerning AI summaries of news content and financial advice from AI chatbots.