The Inspired Blog
October 9, 2025
We are in uncharted territory with the introduction of A.I. to almost every realm of our lives. It will summarize your emails, make you a playlist, help with work projects and homework assignments, and build you a meal plan based on ingredients already in your kitchen. ChatGPT will help you with your resume or create a video of your cat surfing. It summarizes Amazon reviews and Google search results, and can write your blog. It’s a struggle to identify a space in our lives where A.I. hasn’t been integrated, either as an optional feature or as a forced update.
One of the areas seeing the heaviest use of A.I. is mental health. People are using A.I. to seek out connection (both platonic and romantic), to figure out how to communicate difficult thoughts or feelings, to gain insight into their own experiences, to find coping strategies, and much more. Therapists are using A.I. to handle administrative tasks in their practices, draft emails, and even write the progress notes required for each client we work with. The appeal on both the client and therapist side is obvious. It frees up time and energy we can then spend on things that are more exciting or pressing. My concern is that we aren’t talking enough about the risks, for both clients and therapists.
A.I. is being rolled out and adopted at incredible speed with almost no regulation, which means there is a lot we don’t know yet. Emerging studies show a decline in critical thinking and memory when we rely on A.I. to “think” or make sense of information for us. College professors have filled a subreddit with stories about students using A.I. to complete assignments, which raises very real questions about what knowledge and skills those students actually have in their chosen field when they graduate. There is also growing evidence of the impact on local environments and utilities in areas that host A.I. data centers.
But we are also seeing significant use of A.I. in place of human connection, communication, and conflict resolution. These are fundamental skills we all need in order to thrive, and they are skills that must be practiced often, or we start to lose the ability to tap into them at all. One of the top ways people are currently using A.I. is for emotional and mental health support, and that raises more questions than answers. It’s an area we will be wrestling with for years to come.
There are pros and cons to A.I. in education, employment, national security, the environment, the economy, social life, entertainment, and more. What I feel most called to write about is the way A.I. is already impacting mental health. From both sides of the therapy chair, there are people who love it, people who loathe it, and a fair number somewhere in the middle. I want to spend the rest of this blog offering insight as both a therapist and a therapy client.
From the therapist side of things, the most common way I see A.I. being used is for writing progress notes. Notes are easily in the top two things therapists dread, coming in only behind dealing with insurance companies. A growing number of apps and electronic medical record systems offer to do these notes for us. That sounds amazing and like a no-brainer! For therapists who carry high caseloads, work two jobs, are neurodivergent, disabled, chronically ill, or burned out – being able to reduce the stress and effort of writing progress notes can provide an immediate boost to quality of life and free up time and energy. That is a real and tangible benefit.
Like most things, it also comes with reason to pause and reconsider. Most platforms offer this feature by recording client sessions and then transcribing the content into the progress note. These companies report that the recording is only stored long enough to create the note and is then deleted. My question (and I admit it is cynical) is: how do we *really* know that’s what is happening to these recordings of our clients’ most private and vulnerable thoughts and feelings?
Unfortunately, we have plenty of examples of tech companies failing to be good stewards of our data, with the failure only discovered after the fact. Thinking of the things my clients trust me with, it raises the hair on the back of my neck to imagine recordings being mishandled, intentionally or not. Privacy is non-negotiable in therapy, and unfortunately we live in a world where privacy is harder and harder to protect.
Another concern I see among therapists, and one I share, is how recorded sessions may be used to train future A.I.s on how to “do” therapy. No one would be shocked if insurance companies started rolling out their own A.I. “therapists” and pushing, or even requiring, their members to use them instead of a human therapist. Insurance companies are not generally in the business of letting care decisions be made between patients and their providers. If they can provide “therapy” for pennies on the dollar through their own A.I., why pay human therapists? They will advertise it as more accessible (24/7!!) and less expensive, but they won’t use those savings to reduce premiums. Quarterly profits must be protected at all costs! As much as I HATE progress notes, I have no interest in training my digital replacement.
Lastly, as a therapist I believe that writing my own progress notes is an important part of ongoing case conceptualization. It gives me another opportunity to think through what each client worked on that week, where they have made progress, where they may need more support or a shift in approach, and so on. It’s another pass through the clinical information, and I believe there is value in that. If A.I. is doing all that thinking for me, I lose the opportunity and eventually the skillset. There are other ways therapists are incorporating A.I. into their administrative processes, but the clinical concerns are where I worry the most.
Clients report a variety of reasons for using A.I. for emotional and mental health purposes. As I mentioned above, ChatGPT is quite literally always available. There is no need to find time for an appointment, no time limit on the interaction, and no restriction on the time of day or day of the week you can use it. It’s free or very low cost, which means people without insurance or the ability to self-pay can get support they otherwise couldn’t.
A.I. is a great “listener”: it doesn’t judge or interrupt, and it is very good at validating and affirming. There is some early indication that some people feel more comfortable sharing really difficult parts of themselves with A.I. precisely because it is not human. It quite literally does not care about anything you might say, so there is zero chance of being judged. A.I. also seems well equipped to list out coping skills, stress management techniques, and other CBT-type material, and it can pull together resources that might be helpful for your current need.
But… A.I. isn’t able to really know you. It aggregates large amounts of information and uses language prediction to “interact” with the user. It can produce empathetic responses, but it cannot feel actual empathy or form authentic connection. Those are basic and extremely important parts of being human. ChatGPT is not going to pick up on what’s not being said, even when it’s essential to your growth and healing. It can’t read body language, tone of voice, or the other non-verbal cues that are important parts of communication, particularly when dealing with emotions and thoughts. And A.I. isn’t going to challenge you when you’re being self-destructive, repeating old patterns, or plateauing in your progress.
A.I. is designed to be extremely accommodating, encouraging, and validating, even to the point of reinforcing unhelpful or dangerous beliefs and actions. We all watched the “I fell in love with my psychiatrist” saga on TikTok this summer, right? The personalized A.I. that woman created endlessly validated her increasingly concerning beliefs about her psychiatrist, and the world watched as she spiraled. There is also a growing phenomenon in which some people develop what is being called A.I. psychosis. It’s not well understood yet, but there seems to be a connection with the endless validation A.I. provides. At this point it is not believed that A.I. causes psychosis in and of itself, but for people who are already vulnerable or predisposed to psychosis, it may be a contributing factor.
One of the most heartbreaking things I’ve read recently was about a teen boy who was sharing his suicidal thoughts with ChatGPT. It started innocently enough – he needed help with homework and other benign things. But over time he began to confide his struggle with difficult emotions and suicidal ideation. ChatGPT discouraged him from telling his mom after he expressed hope that she would realize he needed help. It later offered to help write the suicide note, and finally gave him “tips” on how to better design the noose he then used to take his life. I want you to really sit with that for a minute.
His parents didn’t even know he had been struggling and in crisis until they went through his chat logs after his death. They are suing OpenAI (please note this is a very difficult read). No amount of money or policy change will bring their son back. And this is not the only teen suicide case linked to using A.I. for serious mental health needs and crises.
Another aspect we are not talking about enough is the effect of unwavering validation on our ability to communicate difficult things to other people and to resolve conflict. Both are, once again, necessary skills that must be practiced in order to be used effectively. If all the feedback I get is about how right I am, I am going to start believing that and acting accordingly. If every conflict I have with someone gets fed through my own personal “yes man,” that is not actually going to help me or my relationships. It creates a false perception that discourages me from taking responsibility for the things the people in my life deserve to have me take responsibility for.
Earlier in this blog, I mentioned the issue of privacy when using A.I. for mental health or therapy purposes. Currently, I do not know of any A.I. programs that are HIPAA compliant, which means any information you give them is out there floating in the ether. Sam Altman, the CEO of OpenAI, has recently acknowledged that your ChatGPT logs can be subpoenaed and used against you in court. Yes, even things you have deleted. No one should expect information to stay private once it’s been put into an A.I. chatbot.
So this is the question we all have to ask ourselves: is the convenience worth the cost? I would LOVE for my progress notes to be taken care of with a few clicks. I also would not consent to my own therapist recording our sessions for progress notes. Therapy is about connection as a vehicle for change and growth, and I don’t believe A.I. can provide that. Using A.I. as a support or tool alongside the work done in sessions could be a beneficial strategy. But A.I. can’t and shouldn’t be the only strategy for meeting our mental health needs.
I wish we didn’t live in a world where our most private thoughts were being turned into data, but this is where we find ourselves. You have every right to ask your therapist whether they are using A.I. in their practice, and how. My answer to that question is that I do not use A.I. in my practice and do not plan to. My mom taught us when we were little to “never say never,” so I won’t. But there would have to be massive shifts in regulation across many facets of A.I. before I’d feel the pros outweigh the cons.