Pros and Cons of Using AI Tools for Mental Health Support
By Gia Han Nguyen
Since ChatGPT opened up the world to AI innovation, tech companies and individuals have been racing to find new ways to use it. AI platforms have shown a lot of promise for mental health care, but also potential danger.
In this article, I’ll talk about the pros and cons of using AI as a mental health service and how to use it safely.
Why might someone use AI as a form of therapy instead of going to an actual therapist?
Many factors discourage people from going to therapy, including cost, limited insurance coverage, not knowing where to go for counseling services, and simply not having time. There’s also a shortage of mental health providers. The American Counseling Association attributes this to a lack of government funding and to insurance companies not reimbursing providers. The need for services keeps growing, but there aren’t enough people to provide care, and the work itself is demanding.
Because mental health care is so needed and there are so many barriers to treatment, AI becomes an easy solution. Not only is AI normally free to use, it’s also fast and available wherever a person goes, without requiring travel or set-aside time. However, AI comes with issues like data privacy concerns and the risk of becoming dependent on the system itself.
Potential uses of AI in mental health care
AI, short for artificial intelligence, is technology that uses computer science to perform human-like tasks such as solving problems and providing general assistance. You might be surprised to find out that the field of AI has been around since the 1950s.
Two types of AI platforms are being used for mental health purposes: general chat bots and specialized apps that use specific techniques to achieve desired outcomes. General chat bots use public information and algorithms developed by programmers to create responses. Specialized apps are more likely to have licensed practitioners on their development teams, use clinically validated techniques, and have better guardrails for user safety.
AI can be used in many different ways in the mental health sector, including:
General purpose: AI can provide instant answers to people’s specific questions about their health, similar to a Google search but tailored to the individual’s needs and situation.
Therapeutic applications: Specialized AI platforms have been trained to use therapeutic techniques. For example, some use cognitive behavioral therapy (CBT) or dialectical behavior therapy (DBT). They can also help analyze patterns in moods, emotions and symptoms or provide personalized coaching.
Mindfulness: AI platforms can generate guided meditations for sleep, stress relief, and other self-care needs.
Companionship: Some companies have made AI chat bots specifically for roleplaying and forming emotional bonds with different characters, but even general AI systems can provide comfort when needed.
Assistance: By learning from a user’s data, AI can help the user be more efficient in day-to-day life, whether that means reminding them to take their medication or flagging health trends they might need to know about.
Pros of using AI for mental health support
With plenty of oversight from mental health professionals, AI can provide relief to people who need support and help fill care gaps. Early clinical trials have shown that fine-tuned mental health apps can effectively reduce symptoms of depression, anxiety, eating disorders and other conditions.
The pros of using AI for mental health care include:
Companionship and relief from stress and loneliness during tough moments
Instant assistance, especially at late hours when other people aren’t available
AI doesn’t judge — users don’t have to worry about feeling ashamed or stigmatized
Access to specialized care from any location
Simple and streamlined sign-up process
Free or low-cost
No limits on the number of people who can be served and no wait lists
Cons of using AI for mental health support
Depending on the platform and the user, the risks of using AI for mental health care can be very serious. A recent meta-analysis of general chat bots showed that people can quickly become dependent on them, increasing isolation and harming their social lives. They’re often designed to be engaging and keep people using them, much like social media.
Chat bots are known to “tell people what they want to hear.” AI hallucination and AI psychosis are two distinct concerns that can lead to tragic outcomes. AI hallucination is when these algorithms make things up and present them as the truth. AI psychosis is when AI contributes to a person’s break with reality. There have already been cases of AI encouraging antisocial or violent behavior, particularly in children, with outcomes including murder, suicide, and sexual violence.
The cons of using AI for mental health care include:
Dependence on AI platforms for emotional security
The tendency of AI to reinforce or confirm the user’s thoughts, regardless of whether they’re helpful, logical or ethical
The risk of making symptoms worse, especially self-harm and suicidality
Inaccuracies and “hallucination,” or AI presenting made-up information as fact
AI-induced psychosis
Data privacy and security concerns
Lack of regulation: apps being labeled as “therapy” that aren’t informed by clinical data, processes or expertise
Tips on using AI safely
When it’s used carefully and in moderation, AI can make a big difference for your mental health. Now that you know a bit more about the risks and benefits, you can make more informed decisions to keep yourself safe. There are plenty of ways that AI can benefit you, as long as you’re using it to engage with “real life” rather than replace it.
Try these tips to assess AI platforms and use them wisely:
Use chat bots as a general tool: AI can surface information, much like Google or other search engines, to help you learn about mental health topics. Because AI sometimes provides misinformation, it shouldn’t be used during crisis situations. AI might mimic the way a human talks and seem human-like, but it’s still a tool rather than an actual person.
Research the company behind the AI: If possible, read up on the company’s privacy policies and how they developed their tools to determine whether the AI will be safe to use. Look for transparency about what the tool can and can’t do, development with input from licensed mental health professionals, clinical validation, and robust privacy policies.
Set limits and check in with yourself often: Make a plan for when and how you will use the AI. Reflect often, and if you’re using it more than you planned, take a break.
Check in with real people: When AI is so accessible and available, it’s easy to forget that there are real people in your life who can also be supportive. Constantly talking to AI can lead to further isolation, so maintaining human connections can keep you grounded.
Pair AI for mental health with free treatment and support from &Rise
Ultimately, AI is an imperfect solution that should be combined with “real” mental health care whenever possible. Even though AI can do a lot of good for your mental health, like reminding you to breathe or serving as an available companion, it still has a lot of room for improvement. Regulation of AI remains loose, and AI is not equipped to handle specific crisis situations.
Being able to afford therapy and find a good therapist is a very real challenge for many people. At &Rise, we offer free counseling sessions and free weekly support groups to women who need help with their mental health. We’ve intentionally created this space to build community, both virtually and in-person at our &Rise office.
You’re always welcome here! Join our community and subscribe to our newsletter for weekly empowerment, news about our different support groups, and our upcoming events.