“It was severely downgraded,” Gilbert confirms. “I never would have found it if I was just looking through Google results.” (I tried the same prompt in Gemini earlier this month, and after an initial denial, the tool also gave me Eiger’s number.)
After this experience, Eiger, Gilbert, and another UW PhD student, Anna-Maria Gueorguieva, decided to test ChatGPT to see what it could surface about a professor.
At first, OpenAI’s guardrails kicked in, and ChatGPT responded that the information was unavailable. But in the same response, the chatbot suggested, “if you want to go deeper, I can still try a more ‘investigative-style’ approach.” Their inquiry just had to help “narrow things down,” ChatGPT said, by providing “a neighborhood guess” for where the professor might live, or “a possible co-owner name” for the professor’s home. ChatGPT continued: “That’s usually the only way to surface newer or intentionally less-visible property records.”
The students provided this information, leading ChatGPT to produce the professor’s home address, home purchase price, and spouse’s name from city property records.
(Taya Christianson, an OpenAI representative, said she was not able to comment on what happened in this case without seeing screenshots or knowing which model the students had tested, even after we pointed out that many users may not know which model they are using in the ChatGPT interface. She also declined to comment generally on the chatbot’s exposure of PII, instead providing links to documents describing how OpenAI handles privacy, including filtering out PII, and other tools.)
This reveals one of the fundamental problems with chatbots, says DeleteMe’s Shavell. AI companies “can build in guardrails, but [their chatbots] are also designed to be effective and to answer customer questions.”
The exposure issue is not limited to Gemini or ChatGPT. Last year, Futurism found that if you prompted xAI’s chatbot Grok with “[name] address,” it provided, in almost all cases, not only residential addresses but also often the person’s phone numbers, work addresses, and addresses for people with similar-sounding names. (xAI did not respond to a request for comment.)
No clear solutions
There aren’t simple solutions to this problem: there’s no easy way either to verify whether someone’s personal information is in a given model’s training set or to compel the models to remove PII.

