ChatGPT is increasing its presence in the health care realm. OpenAI said Wednesday that its popular AI chatbot will begin rolling out ChatGPT Health, a new tab dedicated to addressing all of your medical inquiries. The goal of this new tab is to centralize your medical data and provide a private space for your wellness concerns.
Searching for answers about a plethora of health issues is a top use for the chatbot. According to OpenAI, "hundreds of millions of people" log in to ChatGPT each week to ask a variety of health and wellness questions. Additionally, ChatGPT Health (currently in beta testing) will encourage you to connect any wellness apps you already use, such as Apple Health and MyFitnessPal, resulting in a more connected experience with more information about you to draw from.
What this means for your privacy
Online privacy, particularly in the age of AI, is a major concern, and this announcement raises a number of questions about how your personal health data will be used and what safeguards will be implemented to keep sensitive information secure, especially given the proliferation of data breaches and data brokers.
"The US does not have a general-purpose privacy law, and HIPAA only protects data held by certain entities like health care providers and insurance companies," Andrew Crawford, senior counsel for privacy and data at the Center for Democracy and Technology, said in an emailed statement.
He continued: "The recent announcement by OpenAI introducing ChatGPT Health means that a number of companies not bound by HIPAA's privacy protections will be collecting, sharing and using people's health data. And because it's up to each company to set the rules for how health data is collected, used, shared and stored, inadequate data protections and policies can put sensitive health information in real danger."
OpenAI says the new tab will have a separate chat history and a memory feature that keeps your health conversations apart from the rest of your ChatGPT usage.
Additional protections, such as encryption and multifactor authentication, will keep your data secure, the company says. Health conversations won't be used to train the chatbot, according to the company.
How the tab will work, in theory
Privacy issues aside, another concern is how people intend to use ChatGPT Health. OpenAI's blog post states the service "is not intended for diagnosis or treatment."
The slope is slippery here. In August 2025, a man was hospitalized after allegedly being advised by the AI chatbot to replace the salt in his diet with sodium bromide. There are other examples of AI offering incorrect and potentially harmful advice, resulting in hospitalization.
OpenAI's announcement also doesn't touch on mental health concerns, but a blog post from October 2025 says the company is working to strengthen its responses in sensitive conversations. Whether those mental health guardrails will be enough to keep people safe remains to be seen.
ChatGPT Health is intended to be used for fitness, nutrition and medical information, according to OpenAI. It's also not aiming to replace mental health care, and the chatbot has been trained to direct people asking for mental health help to real-world resources, like crisis hotlines.
OpenAI says that after working with more than 170 mental health experts, it has improved ChatGPT's ability to respond to mental health distress and de-escalate such conversations. It also says its GPT-5 model hallucinates up to eight times less often than its previous models, according to some benchmarking tests.
If you're interested in ChatGPT Health, you can join a waitlist, as the tab isn't yet live.
(Disclosure: Ziff Davis, CNET's parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

