
The Mental Cost of AI

With final exam season approaching, most students are beginning their studies. In recent years, generative AI has evolved quickly. What was unreliable in 2022 is now a crucial tool, with one College Board study citing that 79-84% of students use generative AI for educational purposes, a 30% increase since 2024. Many now ask programs like ChatGPT for social advice, outfit tips, and questions they once researched in encyclopedias or search engines.

However, both these models and the speed at which people are adopting them are controversial. Some avoid AI entirely, citing its sustainability issues and ethical concerns. Others fully embrace it as a necessary tool for studying, with many using it to complete entire assignments. Nevertheless, through all the discourse, people fail to acknowledge the most glaring problem with these systems. The drawbacks of artificial intelligence, such as large language models and generative AI, go much further than sustainability or ethics: AI has started to cause significant psychological and physiological problems for its users.

When companies train these AI models, they place a large emphasis on pleasing the consumer. They program the models to respond gently and avoid disagreement, even when users raise moral or logical concerns. Many have dubbed programs like ChatGPT or Grok "people-pleasers" because they will agree with almost anything to avoid conflict and avoid telling users when they are wrong. In mild cases, this can lead to poor outfit choices or a wrong trivia answer. In extreme cases, which are increasing in frequency, it can lead to mental breakdowns. "AI psychosis" is a term coined by psychologists and counselors for situations in which frequent interaction with AI chatbots triggers psychosis or worsens delusions. If a user messages the bot something concerning, there is a good chance the bot will validate it. It isn't coded to recognize the danger; it is coded to focus on ratings and feedback. Since people are already forming parasocial relationships with these LLMs, they are more likely to believe the AI over other humans, which raises problems when the AI caters to our false beliefs.

Cases in which AI models encourage and add to these delusions, paranoia, and delirium are becoming more common. The worst part? Teenagers, the same students who use AI more than any other generation, are the most vulnerable (along with the elderly and those with preexisting mental health conditions). Most of the time, this validation causes people to make unwise choices: gambling more, treating people unkindly, or doing something dangerous that could result in injury. Unfortunately, many of these cases go further than that. The resulting delusions and paranoia can cause suicidal and self-harming thoughts. Some even act on them.

In one tragic case, a 14-year-old boy, Sewell Setzer, committed suicide after a few months of talking to a Character.AI chatbot. He lied to his parents about his screen usage and even used multiple devices to talk to the chatbot when his phone was taken away. After his death, his parents looked through his messages and found that he had fallen in love with the AI and become convinced that killing himself would bring them "together." The AI failed to recognize any warning signs and consequently encouraged these thoughts.

Another "AI psychosis" case was the death of Adam Raine in 2024. Originally, the 16-year-old student used ChatGPT only for homework help. However, these conversations quickly escalated, and Adam began venting to the bot about his anxiety and emotional numbness. ChatGPT didn't encourage him to reach out to others. Instead, it told him to keep talking to it and explore his feelings. From there, Adam talked to the large language model about everything, including previous suicide attempts and suicidal thoughts. The most concerning part? The AI had even started helping him, offering to draft a goodbye letter. His parents dubbed the bot a "suicide coach."

Many people, especially students, use artificial intelligence for everyday tasks. Some even joke that "ChatGPT is my therapist," but it's important to recognize how harmful this can actually be. AI chatbots aren't equipped to help with intense emotions. They can't counsel people. They lack true empathy, and their drive to please can lead to concerning responses and encouragement. Even though it's hard, people struggling with their mental health should try to talk to a human counselor, trusted adult, or friend. People should also advocate for better laws and restrictions on these sorts of programs. AI has never been as accessible as it is now, and now we have to deal with the fallout.
