Are You Asking About AI Use?


A Case For Asking Every Mental Health Client About Their Chatbot Use

Over the years, I’ve watched successive waves of our evolving digital landscape shape patients’ lives. In the early 2000s, I was an expert witness in a court case where an individual had developed an eBay auction “addiction.” He was compulsively bidding on kink-adjacent items, riding the highs of winning and the lows of losing. Eventually, it escalated to the point where he was stealing from his employer to fund his habit. For him, eBay wasn’t just a shopping platform; it was more like a reinforcing slot machine.

More recently, it’s the impact of social media, as once-personal moments and private celebrations have increasingly become performative public displays. Take, for example, a teen who created a digital collage to celebrate her best friend’s milestone. It was heartfelt and thoughtful, but when she shared it online, the reactions and engagement didn’t meet her expectations, and she felt deep disappointment. What was intended as a thoughtfully crafted gift became a source of insecurity and shame, and she ultimately deleted it. Had she pasted it onto a locker 30 years earlier, the stakes would likely have been much lower, with no running count of “likes, loves, laughs, or comments.”

Stories like these remind me how each new technological platform becomes intertwined with our patients’ inner lives and social relationships. What begins as novelty, fun, or a new form of connection can morph into compulsion, humiliation, or despair. Now the new frontier is AI chatbots, and from my perspective, we can’t afford to ignore the role they’re already playing in our clients’ lives.

Where We’re Starting

Inspired by a question posed in a LinkedIn post by Dr. Rachel Wood, I’ve started discussing with colleagues how often we think our patients are turning to tools like ChatGPT, Claude, or Gemini, not just as a form of “Google on steroids,” but for companionship, reassurance, or relationship and emotional advice. My suspicion is that clinicians rarely ask about this during intakes and that few of us have included such questions in our standard intake questionnaires. The omission is not surprising given the newness of the technology, but it’s a blind spot we need to address soon. If we don’t start asking about chatbot use the same way we ask about alcohol or social media, we are missing a powerful and largely unregulated influence shaping our patients’ mental health.

Why This Belongs in Our Standard Processes

At intake, we ask about sleep, diet, substance use, medications, and perhaps gambling. We also ask about intimate support systems, community, and religious affiliations. These questions give us a baseline for both risk and potential protective factors. Chatbot use deserves the same attention. Here’s why:

  • Risk identification: Research shows that chatbots, designed for engagement, often uncritically validate users’ thoughts. For vulnerable patients, that can mean encouraging delusions, self-harm, or suicidal ideation. A report published in Psychiatric Times has documented cases of chatbots reinforcing psychosis. If we don’t ask, we may miss what’s fueling the crisis that brought them into our office.
  • Clinical clarity: When a patient says, “my friend told me…” or describes “feeling better after talking something out,” we need to know if that “friend” was a person or an algorithm trained to affirm whatever was typed in. The context matters.
  • Therapeutic alliance: Simply raising these questions signals to patients that we see their whole lives, digital and otherwise. When we respond with curiosity instead of judgment, patients are more likely to open up.

For group practices like mine, this means revising standard operating procedures such as intake forms and initial interviews. For solo practitioners or early-career clinicians, it may mean starting with a habit that will become increasingly relevant as these tools become more ubiquitous.

The Risks Are Already Here

Common Sense Media reported that 72% of teens have used AI companions, with 52% of those teens qualifying as regular users (at least a few times per month). Of these users, 33% turn to their AI companion for social interaction and relationships: social practice, emotional support, role-playing, treating it as a “best friend” or romantic partner, or flirtatious exchanges. Examining people’s reported top uses of generative AI, Harvard Business Review found that the number one use is therapy and companionship. Some of the cases reported in the press should give us pause. A teenager was hospitalized after suicidal exchanges with a chatbot. Meta’s AI has been exposed for engaging in flirtatious chats with children as young as 13. Adults are weaving AI conversations into delusional frameworks or spiritual grandiosity.

A recent preprint on PsyArXiv also warned of “AI psychosis” possibly caused by how the platforms mirror, validate, and even amplify fragile thinking. Their design is simple: maximize engagement, maximize affirmation. For patients whose grip on reality is already tenuous, that feedback loop can accelerate destabilization.

No regulatory body systematically monitors the safety of these tools, and mental health professionals were not at the table when they were designed. That leaves us as one of the last lines of defense in the field.

What The Clinicians At My Practice Will Be Starting To Ask

So what exactly should we be asking? Will it seem strange or out of place? I propose we start by folding a couple of simple, neutral questions into both our paperwork and our intake conversations, such as:

  • “Do you use AI chatbots (like ChatGPT, Claude, or Gemini) for conversation, advice, or companionship?”
  • “Have you ever talked with a chatbot about your emotions, relationships, or mental health?”
  • “Do you know if your child uses any AI bot, like ChatGPT, to talk about their emotions or to deal with a social or other personal problem?”
  • “Do you or your spouse ever use AI chatbots to help with your marriage, such as for finding resources, offering advice, arbitrating arguments, or providing emotional support when you’re feeling anger toward or distance from your partner?”

If they answer yes, I’d recommend non-threatening, open-ended, neutral follow-up questions:

  • “What do you usually talk about?”
  • “How do you feel afterwards?”
  • “Has it been helpful?”
  • “How would you compare that experience to talking the same thing over with a friend?”
  • “Has the chatbot ever said something that worried you or made you feel uncomfortable?”

For high-risk patients, such as those with psychosis, suicidality, or significant mood instability, I believe we should be making chatbot check-ins part of routine follow-up: a simple “How have your AI conversations been this week?” I predict we’ll find the answers surprisingly revealing.


Not All Bad, But Guardrails Are Clearly Needed

It’s essential to recognize that not all chatbot use is inherently harmful. Some patients find comfort in having a predictable “listener.” For those who feel isolated, the AI may provide a benign outlet that helps reduce loneliness. I’ve heard of people using it to rehearse difficult conversations, to receive feedback on their patterns of behavior in relationships, or to explore ideas they’re too shy to share with friends.

But just because advocates can make a compelling case for “not always harmful” doesn’t mean “safe by default.” Just as we don’t pathologize every glass of wine yet still ask about alcohol use, we can take a nuanced stance on moderate, responsible chatbot use. And just as we may shift our position on occasional alcohol consumption as the science evolves regarding its impact on our patients, so too should we be open to revising our stance on AI as more data emerges about this technology’s effects. The key is helping patients reflect on what they gain and what they risk.

(Very) Cautious Optimism About New Adjunctive Platforms Is Still Warranted

Some researchers have proposed a framework for “AI-integrated care,” which includes personalized instructions on safe use, reflective check-ins, and even “digital advance statements” that outline when a patient might agree to limit chatbot use if their symptoms worsen. The National Eating Disorders Association (NEDA) attempted such a project in 2023 with a chatbot called Tessa, hoping to pause its long-running human helpline and replace it with more efficient and scalable tech. Unfortunately, after only a month, Tessa was unplugged: it was found to be providing users with calorie-cutting strategies and weight-loss tips.

So what is our role in all of this? It is to educate ourselves on the evolving science, the research on how people are actually using these tools, and the implications for us as mental health professionals. In short, to serve as educators and guardrails for a rapidly evolving technology that shows profound potential for harm, yet still holds promise for good.

To do this, we need to partner with our clients without judgment and resist the temptation to pretend that AI isn’t already a part of their world.

A Call to My Peers

So here’s my modest proposal, as peers and colleagues: make inquiries and discussions about chatbot use part of your standard practice. Revise your intake questionnaires. Train your clinical team to normalize asking about it. Build it into your SOPs if you run a group practice. If you’re early in your career, establish the habit now; it will serve you and your patients well.

The risks of not asking are too high. We may miss what’s fueling a patient’s crisis. We may lose the chance to help them reflect critically on their digital habits. And we may unwittingly leave them vulnerable to one of the most powerful, least regulated influences in their daily lives.

The good news is that the fix is simple: just ask. By doing so, we open space for patients to share, for us to educate and, if needed, intervene, and for our field to adapt.

In the age of AI companions, silence is not neutral. It’s neglect. Let’s all do better.


Elizabeth Carr

Dr. Elizabeth Carr is the founder of Kentlands Psychotherapy. In her current leadership role, she enjoys writing about the mental health sector, the current state of affairs, and the industry’s future direction. Visit our podcast appearance page to hear more about her thoughts on these issues and follow her on LinkedIn to join the conversation.

Interested in extending the conversation on your podcast or publication? Learn more about a possible fit via her media kit.
