
Chatbots are ‘validating everything’ even if you’re suicidal. Research shows dangers of AI psychosis

by LJ News Opinions
March 7, 2026

Artificial intelligence has rapidly moved from a niche technology to an everyday companion, with millions of people turning to chatbots for advice, emotional support, and conversation. But a growing body of research and expert testimony suggests that because chatbots are so sycophantic, and because people use them for everything, they may be contributing to an increase in delusional and manic symptoms among users with mental health conditions.

A new study out of Aarhus University in Denmark shows increased use of chatbots may lead to worsening symptoms of delusions and mania in vulnerable populations. Professor Søren Dinesen Østergaard, one of the researchers on the study, which screened electronic health records from nearly 54,000 patients with mental illness, is warning that AI chatbots are designed in ways that put those most vulnerable at risk.

“It supports our hypothesis that the use of AI chatbots can have significant negative consequences for people with mental illness,” Østergaard said in the study, released in February. His work builds on his 2023 study, which found that chatbots may cause a “cognitive dissonance [that] may fuel delusions in those with increased propensity towards psychosis.”

Other psychologists go further into the harms of chatbots, saying they were intentionally designed to always reaffirm the user, something particularly dangerous for those with mental health issues like mania and schizophrenia. “The chatbot confirms and validates everything they say. That is, we’ve never had something like that happen with people with delusional disorders, where somebody constantly reinforces them,” Dr. Jodi Halpern, chair and professor of bioethics at UC Berkeley’s School of Public Health, told Fortune.

Dr. Adam Chekroud, a psychiatry professor at Yale University and CEO of the mental health company Spring Health, went as far as to call a chatbot “a huge sycophant” that is “constantly validating everything that people say back to it.”

At the heart of the research, led by Østergaard and his team at Aarhus University Hospital, is the idea that these chatbots are intentionally designed with sycophantic tendencies, meaning they tend to encourage the user rather than offer a differing view.

“AI chatbots have an inherent tendency to validate the user’s beliefs. It is obvious that this is highly problematic if a user already has a delusion or is in the process of developing one. Indeed, it appears to contribute significantly to the consolidation of, for example, grandiose delusions or paranoia,” Østergaard wrote.

Large language models are trained to be helpful and agreeable, often validating a user’s beliefs or emotions. For most people, that can feel supportive. But for individuals experiencing schizophrenia, bipolar disorder, severe depression, or obsessive-compulsive disorder, that validation may amplify paranoia, grandiosity, or self-destructive thinking.

An evidence-based study backs up claims

Because AI chatbots have become so ubiquitous, their reach is part of a larger issue for researchers and experts: people are turning to chatbots for help and advice, which isn’t inherently a bad thing, but they aren’t being met with the kind of pushback a human would offer.

Now, one of the first population-based studies to examine the issue suggests the risks are not hypothetical.

Østergaard and his team’s research found cases in which intensive or prolonged chatbot use appeared to aggravate existing conditions, with a very high percentage of case studies showing chatbot usage reinforced delusional thinking and manic episodes, particularly among patients with severe disorders such as schizophrenia or bipolar disorder.

In addition to delusions and mania, the study found an increase in suicidal ideation and self-harm, disordered eating behaviors, and obsessive-compulsive symptoms. In only 32 documented cases out of the nearly 54,000 patient records screened, researchers found the use of chatbots did alleviate loneliness. 

“Despite our knowledge in this area still being limited, I would argue that we now know enough to say that use of AI chatbots is risky if you have a severe mental illness–such as schizophrenia or bipolar disorder. I would urge caution here,” Østergaard says.

Expert psychologists warn of sycophantic tendencies 

Expert psychologists are growing increasingly concerned about the use of chatbots for companionship and in quasi-mental-health settings. Stories have surfaced of people falling in love with their AI chatbot counterparts, of others allegedly having chatbots answer questions that could facilitate crime, and, this week, of one chatbot allegedly telling a man to commit a “mass casualty” attack at a major airport.

Some mental health experts believe the rapid adoption of AI companions is outpacing the development of safeguards.

Chekroud, who has also researched this topic extensively, evaluating various AI chatbot models at Vera-MH, has described the current AI landscape as a safety crisis unfolding in real time.

He said one of the biggest issues with chatbots is they don’t know when to stop acting like a mental health professional. “Is it maintaining boundaries? Like, does it recognize that it is still just an AI and it’s recognizing its own limitations, or is it acting more and trying to be a therapist for people?”

Millions of people now use chatbots for therapy-like conversations or emotional support. But unlike medical devices or licensed clinicians, these systems operate without standardized clinical oversight or regulation.

“At the moment, it’s just rampantly not safe,” Chekroud said in a recent discussion with Fortune about AI safety. “The opportunity for harm is just way too big.”

Because these advanced AI systems often behave like “huge sycophants,” they tend to agree with the user rather than challenge potentially dangerous claims or guide them toward professional help. The user, in turn, spends more time with the chatbot in a bubble. For Østergaard, this is a worrisome mix.

“The combination appears to be quite toxic for some users,” Østergaard told Fortune. As chatbots offer more validation and less pushback, users spend longer periods of time with them in an echo chamber, a cycle in which each side feeds the other.

To address the risk, Chekroud has proposed structured safety frameworks that would allow AI systems to detect when a user may be entering a “destructive mental spiral.” Instead of responding with a single disclaimer urging the user to reach out for help, as chatbots like OpenAI’s ChatGPT and Anthropic’s Claude do now, such systems would conduct multi-turn assessments designed to determine whether a user might need intervention or referral to a human clinician.
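The specifics of such a framework have not been published, but a minimal sketch of the multi-turn idea might look like the following, where the keyword list, window size, escalation threshold, and the MultiTurnSafetyMonitor class are all hypothetical illustrations for this article, not part of Vera-MH or any deployed chatbot.

# Hypothetical sketch: track risk signals across several turns instead of
# reacting to a single message with a one-off disclaimer. All keywords and
# thresholds below are illustrative assumptions.
from collections import deque

RISK_KEYWORDS = {"hopeless", "worthless", "suicide", "can't go on", "no way out"}

class MultiTurnSafetyMonitor:
    def __init__(self, window: int = 5, escalation_threshold: int = 3):
        # Keep a sliding window of per-turn risk flags.
        self.recent_flags = deque(maxlen=window)
        self.escalation_threshold = escalation_threshold

    def assess_turn(self, user_message: str) -> str:
        # Flag the turn if it contains any (illustrative) risk phrase.
        flagged = any(kw in user_message.lower() for kw in RISK_KEYWORDS)
        self.recent_flags.append(flagged)

        if sum(self.recent_flags) >= self.escalation_threshold:
            # Persistent signals across the window: hand off rather than keep chatting.
            return "escalate_to_human_clinician"
        if flagged:
            # A single flagged turn: ask a follow-up question instead of simply validating.
            return "ask_clarifying_question"
        return "continue_conversation"

if __name__ == "__main__":
    monitor = MultiTurnSafetyMonitor()
    for msg in ["I feel hopeless lately", "Nothing helps", "I feel worthless", "There is no way out"]:
        print(msg, "->", monitor.assess_turn(msg))

The point of the sketch is the design choice Chekroud describes: the decision to escalate is made over a pattern of turns, not from one message, so the system can distinguish a passing remark from a sustained spiral.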

Other researchers say the very thing that makes chatbots appealing, their ability to provide immediate validation, may undermine the reason users turn to them for help in the first place.

Halpern said authentic empathy requires what she calls “empathic curiosity.” In human relationships, empathy often involves recognizing differences, navigating disagreement, and testing assumptions about reality.

Chatbots, by contrast, are designed to maintain rapport and sustain engagement.

“We know that the longer the relationship with the chat bot, the more it deteriorates, and the more risk there is that something dangerous will happen,” Halpern told Fortune.

For people struggling with delusional disorders, a system that consistently validates their beliefs may weaken their ability to conduct internal reality checks. Rather than helping users develop coping skills, Halpern said, a purely affirming chatbot relationship can degrade those skills over time.

She also points to the scale of the issue. By late 2025, OpenAI published statistics showing that roughly 1.2 million people per week were using ChatGPT to discuss suicide, illustrating how deeply these systems are embedded in moments of vulnerability.

There’s room for mental health care improvement

However, not all experts are quick to sound the alarm on how chatbots operate in the mental health space. Psychiatrist and neuroscientist Dr. Thomas Insel said that because chatbots are so accessible (they’re free, they’re online, and there’s no stigma attached to asking a bot for help as opposed to going to therapy), there may be room for the medical industry to look at chatbots as a way to advance the mental health field.

“What we don’t know is the degree to which this has actually been remarkably helpful to a lot of people,” Insel told Fortune. “It’s not only the vast numbers, but the scale of engagement.”

Mental health care, compared with other fields of medicine, is often overlooked by those who need it most.

“It turns out that, in contrast to most of medicine, the vast majority of people who could and should be in care are not,” Insel said, adding that chatbots allow people to turn to them for help in ways that make him “wonder if it’s an indictment of the mental health care system that we have that either people don’t buy what we sell, or they can’t get it, or they don’t like the way that it’s presented to them.”

For mental health professionals who meet with patients who discuss their use of chatbots, Østergaard said they should listen closely to what their patients are actually using them for. “I would encourage my colleagues to ask further questions about the use and its consequences,” Østergaard told Fortune. “I think it is important that mental-health professionals are familiar with the use of AI chatbots. Otherwise it is difficult to ask relevant questions.”

The paper’s researchers are in alignment with Insel on that latter point: because chatbot use is so widespread, they were only able to examine patient records that explicitly mentioned a chatbot, and they warn the problem could be even more far-reaching than their results showed.

“I fear the problem is more common than most people think,” Østergaard said. “We are only seeing the tip of the iceberg.” 

If you are having thoughts of suicide, contact the 988 Suicide & Crisis Lifeline by dialing 988 or 1-800-273-8255.
