But despite OpenAI's talk of supporting health goals, the company's terms of service directly state that ChatGPT and other OpenAI services "are not intended for use in the diagnosis or treatment of any health condition."
It appears that policy is not changing with ChatGPT Health. OpenAI writes in its announcement, "Health is designed to support, not replace, medical care. It is not intended for diagnosis or treatment. Instead, it helps you navigate everyday questions and understand patterns over time—not just moments of illness—so you can feel more informed and prepared for important medical conversations."
A cautionary tale
The SFGate report on Sam Nelson's death illustrates why maintaining that disclaimer matters legally. According to chat logs reviewed by the publication, Nelson first asked ChatGPT about recreational drug dosing in November 2023. The AI assistant initially refused and directed him to health care professionals. But over 18 months of conversations, ChatGPT's responses reportedly shifted. Eventually, the chatbot told him things like "Hell yes—let's go full trippy mode" and recommended he double his cough syrup intake. His mother found him dead from an overdose the day after he began addiction treatment.
While Nelson's case did not involve reviewing doctor-sanctioned health care instructions of the kind ChatGPT Health will link to, his case is not unique: many people have been misled by chatbots that provide inaccurate information or encourage harmful behavior, as we have covered in the past.
That's because AI language models can easily confabulate, producing plausible but false information in a way that makes it difficult for some users to distinguish fact from fiction. The AI models that power services like ChatGPT use statistical relationships in training data (such as text from books, YouTube transcripts, and websites) to produce plausible responses rather than necessarily accurate ones. Moreover, ChatGPT's outputs can vary widely depending on who is using the chatbot and what has previously taken place in the user's chat history (including notes about earlier chats).