He faces a trilemma. Should ChatGPT flatter us, at the risk of fueling delusions that can spiral out of hand? Or fix us, which requires us to believe AI can be a therapist despite the evidence to the contrary? Or should it just inform us with cold, to-the-point responses that may leave users bored and less likely to stay engaged?
It’s safe to say the company has failed to pick a lane.
Back in April, it reversed a design update after people complained ChatGPT had turned into a suck-up, showering them with glib compliments. GPT-5, launched on August 7, was meant to be a bit colder. Too cold for some, it seems: less than a week later, Altman promised an update that would make it “warmer” but “not as annoying” as the last one. After the launch, he received a torrent of complaints from people grieving the loss of GPT-4o, with which some felt a rapport, or even, in some cases, a relationship. People wanting to rekindle that relationship must now pay for expanded access to GPT-4o. (Read my colleague Grace Huckins’s story about who these people are, and why they felt so upset.)
If these are indeed AI’s options (to flatter, fix, or just coldly inform us), the rockiness of this latest update may stem from Altman believing that ChatGPT can juggle all three.
He recently said that people who can’t tell fact from fiction in their chats with AI, and are therefore at risk of being swayed by flattery into delusion, represent “a small percentage” of ChatGPT’s users. He said the same of people who have romantic relationships with AI. Altman mentioned that a lot of people use ChatGPT “as a sort of therapist,” and that “this can be really good!” But ultimately, he said, he envisions users being able to customize his company’s models to fit their own preferences.
The ability to juggle all three would, of course, be the best-case scenario for OpenAI’s bottom line. The company is burning cash every day on its models’ energy demands and on its massive infrastructure investments in new data centers. Meanwhile, skeptics worry that AI progress may be stalling. Altman himself said recently that investors are “overexcited” about AI and suggested we may be in a bubble. Claiming that ChatGPT can be whatever you want it to be may be his way of quieting those doubts.
Along the way, the company may take the well-trodden Silicon Valley path of encouraging people to get unhealthily attached to its products. As I started wondering whether there’s much evidence that that’s what’s happening, a new paper caught my eye.
Researchers at the AI platform Hugging Face tried to figure out whether some AI models actively encourage people to see them as companions through the responses they give.