AI Integration in Healthcare: Diagnostic Advancements, Access Programs, and Public Adoption


The Expanding Role of AI in U.S. Healthcare

A series of recent developments illustrates the expanding role of artificial intelligence in the U.S. healthcare system, encompassing advanced diagnostic capabilities, new telehealth programs to address provider shortages, and growing public use of AI tools for health information.

A study found an AI model matched or outperformed physicians in diagnostic accuracy, while a major hospital network launched an AI-supported primary care program. Concurrently, a Gallup poll indicated approximately one-quarter of U.S. adults have used AI for health advice, and federal officials have proposed AI solutions for rural healthcare challenges.

Diagnostic Capabilities and Research

AI Model Performance in Emergency Medicine

A study published in the journal Science tested an AI reasoning model developed by OpenAI on real-world emergency department cases. The research, conducted by Harvard Medical School and Beth Israel Deaconess Medical Center, found that the AI matched or outperformed physicians and the earlier GPT-4 model in diagnosing patients and making care management decisions. The AI used only electronic health records and the same limited information available to physicians at the time of diagnosis.

In experiments involving actual patient cases from Beth Israel Deaconess Medical Center, case reports from the New England Journal of Medicine, and clinical vignettes, the AI outperformed two experienced physicians in diagnostic accuracy at three stages: triage, emergency department care, and hospital admission. The model also performed well against established benchmarks for generating differential diagnoses.

Caveats and Limitations

The study relied solely on text data; real-world clinical practice involves images, sounds, and nonverbal cues. The emergency department represents only a small portion of patient care, and the AI might not perform as well with prolonged hospital stays. Authors emphasized that the findings do not support replacing physicians with AI and that rigorous forward-looking trials are needed to assess clinical impact.

AI in Mental Healthcare

AI tools are being integrated into mental healthcare by health systems and independent therapists. Current adoption has focused primarily on administrative tasks, such as documentation, billing, and updating electronic health records. Companies like Limbic provide AI assistants for large health systems, offering intake and patient support services, including chatbots trained in cognitive behavioral therapy (CBT) skills.

Dr. John Torous of Beth Israel Deaconess Medical Center indicated that widespread clinical use of AI in mental health is not yet common due to lack of testing and high implementation costs. He predicted a future "hybrid" model where AI assists with therapy homework and skill practice while humans provide therapy. Vaile Wright of the American Psychological Association stated that no AI solutions can fully replace human-driven psychotherapy.

Workforce Concerns

In March, 2,400 mental health care providers for Kaiser Permanente (KP) in Northern California and the Central Valley participated in a 24-hour strike. The strike addressed concerns about changes to the triage system, among other issues. Ilana Marcucci-Morris, a licensed clinical social worker at KP, stated that an initial 10-to-15-minute screening is now conducted by unlicensed lay operators or via e-visits. KP stated its use of AI does not replace clinical expertise and confirmed it is evaluating AI tools from Limbic, though these are not currently in use.

AI-Supported Telehealth Programs

Mass General Brigham Care Connect

In September, Mass General Brigham (MGB) launched Care Connect, an AI-supported telehealth program designed to address a primary care provider shortage in Massachusetts. The program allows patients to interact with an AI agent via an app, which compiles a summary for a primary care doctor who then provides care through video appointments, often within one to two days.

Care Connect operates 24/7 with 12 remotely located physicians. It is designed to address common urgent care needs, including colds, rashes, sprains, and mild to moderate mental health concerns, as well as issues related to chronic diseases. The AI tool suggests diagnoses and treatment plans to the doctors. MGB has committed $400 million over five years to primary care services, including the Care Connect initiative. By February, MGB plans to extend services to all insured residents of Massachusetts and New Hampshire.

Provider and Patient Perspectives

Tammy MacDonald reported positive experiences with Care Connect, citing its convenience and quick appointment availability. Some primary care doctors within MGB raised concerns that the investment in AI should be directed towards increasing salaries and retaining primary care staff, and that the program could diminish access to in-person care over time.

Ron Walls, MGB's chief operating officer, stated that Care Connect is part of a broader strategy to address the primary care capacity crisis, which includes retaining current physicians, recruiting new ones, and implementing other AI tools. Helen Ireland, a primary care physician managing Care Connect, believes a segment of patients will value the 24/7 virtual model.

Public Adoption of AI for Health Information

Survey Findings

A Gallup poll published in late 2025 found that approximately one-quarter of U.S. adults reported using an AI tool for health information or advice in the 30 days prior to the survey. This finding is supported by at least three other recent surveys, including a KFF poll from late February and a Pew Research Center survey from October.

According to the Gallup survey, about 7 in 10 U.S. adults who used AI for health research cited reasons including wanting quick answers, seeking additional information, or curiosity. Majorities reported using AI for research before seeing a doctor or after an appointment.

Demographics and Motivations

A KFF survey found that younger adults and lower-income individuals were more likely to use AI tools for health information due to cost or access barriers to professional care. The Gallup study indicated a portion of users turned to AI because accessing healthcare was too expensive or inconvenient. Specific reasons cited included:

  • Wanting help outside normal business hours (approximately 4 in 10)
  • Not wanting to pay for a doctor's visit (approximately 3 in 10)
  • Not having time for an appointment, past negative experiences with providers, or embarrassment (approximately 2 in 10 for each)

Trust and Privacy Concerns

The Gallup poll reported that recent AI health users were roughly evenly split on accuracy: about one-third (33%) said they "strongly" or "somewhat" trust the information, 34% distrusted it, and 33% neither trusted nor distrusted it. A KFF poll found about three-quarters of U.S. adults expressed concern about the privacy of personal medical information provided to AI tools.

Dr. Karandeep Singh of UC San Diego Health described AI tools as an "upgraded version" of traditional web searches for health information. Dr. Bobby Mukkamala, president of the American Medical Association, stated that AI should be considered a tool and not a substitute for medical care, noting it is "an assistant but not an expert."

Broader Context

Surveys indicate that the use of AI has not replaced traditional care-seeking for most individuals. A KFF poll found about 8 in 10 U.S. adults sought health information from a doctor or healthcare professional in the past year, compared to about 3 in 10 for AI tools. A Pew Research Center survey found about 2 in 10 U.S. adults get health information at least sometimes from AI chatbots, while about 85% said the same about healthcare providers.

Federal Proposals for Rural Healthcare

CMS Proposal

Dr. Mehmet Oz, head of the Centers for Medicare and Medicaid Services (CMS), proposed using AI to address the rural healthcare crisis in the United States. This proposal is part of the Trump administration's $50 billion plan to modernize rural healthcare. The plan includes deploying digital avatars for basic medical interviews, robotic systems for remote diagnostics, and drones for medication delivery.

In a statement, CMS indicated that Oz's comments emphasized the need to "responsibly explore tools" to extend the capabilities of licensed clinicians, not to replace them entirely. CMS stated support for AI-enabled tools when they are evidence-based, patient-centered, and utilized under appropriate clinical oversight.

Context and Criticism

Oz's suggestions follow federal Medicaid spending reductions under the One Big Beautiful Bill Act. Between 2005 and early 2024, over 190 rural hospitals closed due to financial pressures. A 2024 CDC report found that rural residents are more likely to die early from five leading causes compared to urban populations.

Carrie Henning-Smith of the University of Minnesota's Rural Health Research Center criticized the proposal, arguing that AI avatars would remove essential human connection from healthcare and that implementing unproven technology in underserved populations could exacerbate disparities. She highlighted logistical challenges including unreliable broadband, low health literacy, and fragile transportation systems in rural areas.

Some health technology leaders, such as Matt Faustman of Honey Health, argued that AI tools could assist rural communities by automating administrative burdens, potentially allowing clinicians to prioritize patient care.

Risks and Reported Incidents

Inaccurate Advice

Research published in Nature Medicine found that participants using AI chatbots for medical scenarios correctly identified hypothetical conditions approximately one-third of the time. Only 43% made correct decisions regarding next steps, such as seeking emergency care. Another study found that AI bots "under-triaged" 52% of emergency medical cases, downplaying the seriousness of ailments.

Dr. Robert Wachter of UC San Francisco described a case where AI incorrectly advised a patient to use ivermectin for testicular cancer. A documented instance involved a 60-year-old man who experienced paranoia and hallucinations after consuming sodium bromide, reportedly after consulting ChatGPT about salt intake reduction.

Privacy Incidents

Last summer, private ChatGPT conversations were discovered indexed on a public website without users' knowledge, highlighting potential privacy risks.

Health-Focused AI Chatbots

New Products

In January, OpenAI launched ChatGPT Health, a program intended to analyze medical records, wellness app data, and wearable device information to provide health and medical answers. Anthropic offers similar features through its Claude chatbot for some users. Both companies state that their large language models are not substitutes for professional medical care and should not be used for diagnosis.