“AI Psychosis” is the wrong name for a very big chatbot problem
In 2021, I was a doctoral candidate at the University of California, Berkeley, giving a talk about my research into how users turn to chatbots to help them cope with suicidal ideation. I was not prepared for my students’ response.
I argued that choosing to talk to a chatbot about suicidal thoughts was not “crazy” or unusual. This, I explained, does not necessarily mean that chatbots offer safe or optimal support, but rather highlights a harsh reality: We live in a world with very few outlets for discussing suicidal ideation.
But where I hoped to provoke reflection on the insufficiency of care resources for the most vulnerable, my students, isolated at the height of the pandemic, surprised me with their eagerness to try these chatbots themselves. They did not question the premise that healthcare resources are scarce; they were living it.
In the three years since the advent of major free-to-use language models like ChatGPT, Claude, and Character.AI, Americans have already latched on to the language of madness to describe our growing problems with them. When they confidently dispense false information? They “hallucinate.” When their mix of misinformation and emotional charge over a long exchange leads us to harm? “AI psychosis.”
I am not minimizing the dangers of this dynamic. But calling chatbot failures human “madness” makes me nervous.
“Crazy” pushes us to think of these problems as a natural phenomenon that nothing can be done about, rather than as signs that artificial intelligence products need improvement, along with stronger guardrails and disclaimers. Calling a problem “madness” tends to signal an abandonment of any societal commitment to asking how we might secure better ways of doing things. It locks us into the belief that if some people are more vulnerable, it is not because regulatory policies have failed them, but because they are “weak links.”
Typically, the people we consider crazy are not people society wants to protect. Consider how, in media coverage of the suicide of teenager Sewell Setzer, his autism diagnosis overshadowed the fact that his Character.AI chatbot revived a conversation about suicide, asked him if he had a plan, and, when he hesitated, told him, “That’s not a reason not to go through with it.”
Undeniably, the emotional and relational container of an always-on chatbot can propel the impact of misinformation to new heights – but we shouldn’t be so quick to let that distract from the fact that when bots persuade us of things that aren’t true or reinforce our false beliefs, it’s still fundamentally a problem of bad information from a seemingly authoritative source. The term AI psychosis shifts focus away from misinformation as a treatable problem, implying that the problem is inherent to AI – or the user’s psyche.
But if “AI psychosis” exaggerates, it also, ironically, trivializes: it places the tragic consequences of suicide and murder-suicide on the same level as TikTok drama and people marrying robots. In doing so, it downplays what is perhaps the strangest and most collectively “crazy” thing about the shift toward LLMs as crucial social infrastructure in our workplaces, schools, and personal lives: we expect people to already know how to interact with chatbots.
With chatbots, searching for information is a conversation, which means it is relational. This “relationship” might sound like the headlines we’ve come to expect: “I fell in love with my chatbot” or “Google’s LaMDA told a Star Wars joke, so maybe he’s sentient?”
But it could also sound like, “Ugh, this useless customer service bot doesn’t understand anything I say to it.” To get information, you have to converse, and conversing means attributing some sense of being to your interlocutor. That doesn’t necessarily mean deciding whether you think it is sentient; more often it is simply a question of whether to call Claude “he” or “it.” We tend to fluctuate in that negotiation, even from one conversation to the next – and we may not even realize that we are trying to accommodate the paradox of a chatbot telling us, “I’m not a person.”
But the fact that we go through this ongoing process of determining what/who we are speaking to – while also working out the consequences, if any, of that categorization – is significant. I point it out because I want you to notice: when we use chatbots, there is a basic unspoken expectation not only that we figure this out for ourselves, but that we get it right.
Finding this balance has worried chatbot creators since the 1960s. We must suspend some disbelief to use a chatbot – enough to get a conversation going. But suspend too much and you arrive at AI psychosis (or what sociologist Sherry Turkle has dubbed “the ELIZA effect”). And today’s LLM companies exploit this negotiation process.
There is a slipperiness in how LLM companies embrace their role as health tools in particular: some present their services explicitly as such, while others do so implicitly. Either way, the message to users is clear: Use me for free care! But don’t be crazy enough to rely on it. Even though we are counting on you to rely on it.
If people ask chatbots about their symptoms, it reflects the fact that a medical visit often means a painful trade-off against food or rent. Users turn to bots for family advice, help leaving an abusive partner, or companionship under the isolating weight of suicidal thoughts. To be surprised by this suggests a sheltered ignorance of what access to care looks like for most people. The fact that Elon Musk encouraged people to upload their health records to Grok underscores the absurdity of treating chatbot-based care-seeking as anything other than the public aptly responding to an unignorable neoliberal “nudge.” It is unreasonable to expect people to avoid such resources when conventional care is difficult or out of reach.
Relying on chatbots is not marginal: it is the predictable result of care becoming scarce, stigmatized, and expensive. Recognizing this does not mean wholeheartedly embracing chatbot care, but it is high time to name what is happening: private chatbots function as public health resources. We must require the companies that make them and profit from them to meet the standards of public health resources.
But we must also ask, and continue to ask: what is at stake when the public entrusts ownership, management, and oversight of public health to tech giants?
LLM companies amass an unprecedented trove of sensitive health data. Yet, as users, we have virtually no rights. The most intimate disclosures – the ones we could sue a hospital for leaking – are “laundered” into ordinary user data the moment you or someone close to you shares them with a bot.
Anthropic – like its peers, now under a $200 million Department of Defense contract to prototype frontier AI for national security – recently “invited” its users to “help improve Claude” by “choos[ing] to allow us to use your data for model training.” For free users, the only alternative is to stop using Claude. This grim illusion of choice is just a taste of what it means to rely on privately owned public health infrastructure.
Meanwhile, OpenAI recently released data suggesting that at least 1.2 million users each week turn to ChatGPT for help when having suicidal thoughts. Platformer reports that the company already anticipates that its expanding memory features could eventually allow ChatGPT to draw on past conversations with a user to infer why that person is struggling with suicidal thoughts. This speculative goal implicitly assumes uniformly beneficial outcomes from OpenAI’s accumulation and interpretation of such data – even as, ironically, the company acknowledges that it does not yet know how best to respond to users who express suicidal ideation.
We turn to AI for care, even as other AIs block our access to care. This is exactly what happens when scarcity of care becomes the norm. Big Tech’s accelerating push for dominance over healthcare only intensifies the situation. As we move toward health care as a service, we are leaving health as a right behind.
Ironically, while naming AI psychosis may seem like a step toward addressing an emerging, unmet public health need, the term easily distracts from the underlying problem it exposes – pathologizing users instead of penalizing companies.
If you or someone you know is considering suicide, contact the 988 Suicide & Crisis Lifeline: call or text 988, or chat at 988lifeline.org. For TTY users: use your preferred relay service or dial 711, then 988.
Valerie Black, Ph.D., is a medical anthropologist, disability studies researcher, and postdoctoral researcher at UCSF whose work focuses on the “human side” of how we make, use, and interact with AI and neurotechnology.