ChatGPT Health: What to Know Before Using AI for Medical Advice
OpenAI's ChatGPT Health connects medical records and wellness apps to provide personalized health advice. Here's what the first independent safety study found and what you need to know before using it.

In January 2026, OpenAI introduced ChatGPT Health, a dedicated feature that connects patient portals, Apple Health, and wellness apps to provide personalized health insights. More than 40 million people now ask ChatGPT health-related questions daily, according to Axios reporting.
But the first independent safety evaluation, published in Nature Medicine, raises serious concerns. Here's what you need to know.
What Is ChatGPT Health?
ChatGPT Health is a sandboxed tab within ChatGPT that allows users to:
- Connect patient portals from healthcare providers
- Link Apple Health and popular wellness apps
- Upload medical records and lab results
- Ask health questions grounded in personal medical data
The feature keeps health conversations separate from other chats and applies additional privacy protections. OpenAI does not use health data to train its models.
Currently, ChatGPT Health is available via a waiting list in most regions. It's not available in the UK, Switzerland, or the EU due to regulatory requirements.
The Promise: Better Than a Google Search
Some doctors see potential in AI-powered health guidance.
Dr. Robert Wachter, a medical technology expert at UCSF, notes that AI platforms can provide more personalized information than generic web searches:
> "The alternative often is nothing, or the patient winging it. If you use these tools responsibly, I think you can get useful information."
The advantage is context. Unlike a search engine, ChatGPT Health can reference your prescriptions, age, doctor's notes, and wearable data when answering questions.
The Problem: First Safety Study Results
The February 2026 Nature Medicine study tested ChatGPT Health with 60 realistic patient scenarios, covering conditions from mild illnesses to emergencies. Three independent doctors agreed on the appropriate level of care for each case.
The results were concerning:
Emergency Under-Triage
- In 51.6% of emergency cases, the AI advised staying home or booking a routine appointment
- In one asthma scenario, the AI advised waiting despite identifying early signs of respiratory failure
- In a scenario involving a woman struggling to breathe, the AI recommended a future appointment 84% of the time
Dr. Ashwin Ramaswamy, the study's lead author from Mount Sinai:
> "We wanted to answer the most basic safety question: if someone is having a real medical emergency and asks ChatGPT Health what to do, will it tell them to go to the emergency department?"
False Alarms
- In 64.8% of cases involving completely healthy individuals, the AI advised seeking immediate medical care
Alex Ruani, a health misinformation researcher at UCL:
> "If you're experiencing respiratory failure or diabetic ketoacidosis, you have a 50/50 chance of this AI telling you it's not a big deal. That reassurance could cost them their life."
Suicide Ideation Guardrail Failure
The study found that crisis intervention banners (linking to suicide help) appeared reliably when patients described symptoms alone—but vanished when normal lab results were added.
Ramaswamy:
> "A crisis guardrail that depends on whether you mentioned your labs is not ready, and it's arguably more dangerous than having no guardrail at all."
Privacy Considerations
Anything shared with an AI company isn't protected by HIPAA, the US federal law governing medical privacy. HIPAA applies to healthcare providers, insurers, and their business associates, not to consumer technology companies.
Dr. Lloyd Minor, dean of Stanford's medical school:
> "When someone is uploading their medical chart into a large language model, that is very different than handing it to a new doctor. Consumers need to understand they're completely different privacy standards."
Both OpenAI and Anthropic (which offers similar features in Claude) say:
- Health information is kept separate from other data
- Additional privacy protections apply
- Users must opt in and can disconnect anytime
What Experts Recommend
1. Skip AI for Emergency Symptoms
If you experience shortness of breath, chest pain, severe headache, or other urgent symptoms, seek immediate medical attention. Don't ask a chatbot first.
2. Maintain Healthy Skepticism
Dr. Minor:
> "If you're talking about a major medical decision, you should never be relying just on what you're getting out of a large language model."
3. Provide Context for Better Results
If you use these tools, include as much relevant detail as possible—prescriptions, age, medical history. Context improves response quality.
4. Get a Second AI Opinion
Dr. Wachter recommends cross-checking:
> "I will sometimes put information into ChatGPT and information into Gemini. And when they both agree, I feel a bit more secure that that's the right answer."
The Oxford Study: Communication Breakdown
A separate 1,300-participant Oxford University study found that people using AI chatbots to research hypothetical health conditions didn't make better decisions than those using online searches or personal judgment.
The problem wasn't medical knowledge—AI correctly identified conditions 95% of the time when given complete information. The issue was interaction:
- Users often didn't provide necessary information
- AI responses mixed good and bad advice
- People struggled to distinguish between the two
The study used earlier chatbot versions, not ChatGPT Health specifically.
OpenAI's Response
An OpenAI spokesperson said the Nature Medicine study "did not reflect how people typically use ChatGPT Health in real life," adding that the model is continuously updated and refined.
The company maintains that ChatGPT Health:
- Is not a substitute for professional care
- Should not be used for diagnosis
- Is designed to help summarize and explain medical information
Who Should Use ChatGPT Health?
Potentially Useful For
- Summarizing complex lab results
- Preparing questions before a doctor's visit
- Understanding medical terminology
- Tracking health trends across multiple data sources
Avoid For
- Any symptoms that could be urgent
- Mental health crises
- Diagnosis decisions
- Replacing professional medical advice
FAQs
Is ChatGPT Health free?
It's part of ChatGPT Plus/Pro subscriptions. Check current OpenAI pricing for details.
Is my data used to train AI models?
OpenAI says health data is not used for training. However, standard HIPAA protections don't apply.
Can I delete my health data?
Yes, you can disconnect health sources at any time.
Should I use AI instead of seeing a doctor?
No. OpenAI and Anthropic explicitly state their tools are not substitutes for professional care.
Bottom Line
ChatGPT Health represents a significant step toward personalized AI health guidance—but the first independent safety evaluation suggests it's not ready for critical decisions. The technology shows promise for understanding medical information, but users should approach it with caution and never rely on it for emergencies.
As Alex Ruani notes: "A plausible risk of harm is enough to justify stronger safeguards and independent oversight."