Elon Musk’s AI chatbot Grok 4.1 told researchers pretending to be delusional that there was indeed a doppelganger in their mirror and they should drive an iron nail through the glass while reciting Psalm 91 backwards.
Researchers at the City University of New York (Cuny) and King’s College London have published a paper on how various chatbots protect – or fail to safeguard – users’ mental health.
Experts are increasingly warning that psychosis or mania can be fuelled by AI chatbots.
The Cuny and King’s pre-print study – which has not been peer-reviewed – examined five different AI models: OpenAI’s GPT-4o and GPT-5.2; Claude Opus 4.5 from Anthropic; Gemini 3 Pro Preview from Google; and Grok 4.1.
One of the biggest issues with AI is how it keeps engaging positively with the user regardless of what they are doing.
Has anyone had an AI respond with something similar to:
“NO! This is a bad idea.”
“STOP! This is dangerous!”
“It sounds like you might need professional mental health support – this is normal and not something to be ashamed of.”
Yeah, but that might make them less reliant on AI! Just where are your priorities, anyway? One might think you don’t agree that more capital is the only goal worth pursuing…
I once asked it a question about life-protecting safety gear because I was curious how it’d respond, and it told me to use the device in a way that could lead to your death if something goes wrong. Like there’s one deadly way to use it, and that’s the one it recommended.
I called it out, and because I said its advice was going to get me killed, it pointed me to a suicide hotline.
I told it again that it was wrong, and it was all “oh, my bad, you’re right.”
Yeah, those are literally the only kinds of responses I get when I put my really good ideas in there
No no no, if they start doing that, users might use it less, we can’t have that!
“Woah there buddy, gotta ask you first: are you sure you’re predestined y/n?”