AI doesn’t feel safe right now. Almost every week there’s a new issue, from AI models hallucinating and making up important information to chatbots at the center of legal cases that accuse them of causing serious harm.
As more AI companies position their tools as sources of information, coaches, companions and even stand-in therapists, questions about attachment, privacy, liability and harm are no longer theoretical. Lawsuits are emerging and regulators are lagging behind. But most importantly, many users don’t fully understand the risks.
Slowing AI down
“We think of ourselves as advisory partners for founders, developers and investors,” Genevieve Bartuski explains. That means helping teams that build health, wellness and therapy tools design responsibly, and helping investors ask better questions before backing a platform.
“We talk a lot about risks,” she says. “Many developers come to this with good intentions without fully understanding the delicate and nuanced risks that come with mental health.”
Bartuski works alongside Anne Fredriksson, who focuses on healthcare systems. “She’s really good at understanding whether the new platform will actually fit into the existing system,” Bartuski tells me. Because even if a product sounds helpful in theory, it still has to work within the realities of healthcare infrastructure.
And in this space, speed can be dangerous. “The adage ‘move fast and break things’ doesn’t work,” Bartuski tells me. “When you’re dealing with mental health, wellness, and health, there is a very real risk of harm to users if due diligence isn’t done at the foundational level.”
Emotional attachment and “false intimacy”
Emotional attachment to AI has become a cultural flashpoint. I’ve spoken to people forming strong bonds with ChatGPT, and to users who felt genuine distress when models were updated or removed. So is this something Bartuski is concerned about?
“Yes, I think people underestimate how easy it is to form that emotional attachment,” she tells me. “As humans, we have a tendency to give human traits to inanimate objects. With AI, we’re seeing something new.”
Experts often borrow the term parasocial relationships (originally used to describe one-sided emotional connections to celebrities) to explain these dynamics. But AI adds another layer.
“Now, AI interacts with the user,” Bartuski says. “So we have individuals developing significant emotional connections with AI companions. It’s a false intimacy that feels real.”
She’s especially concerned about the risks AI poses to children. “There are skills such as conflict resolution that aren’t going to be developed with an AI companion,” she says. “But real relationships are messy. There are disagreements, compromises, and pushback.”
That friction is part of development. AI systems are designed to keep users engaged, often by being agreeable and affirming. “Kids need to be challenged by their peers and learn to navigate conflict and social situations,” she says.
Should AI supplement therapy?
We know people are already using ChatGPT for therapy. But as AI therapy apps and chat-based mental health tools become more popular, another question emerges: should they be supplementing, or even replacing, therapy?
“People are already using AI as a form of therapy and it’s becoming widespread,” she says. But she’s not worried about AI replacing therapists. Research consistently shows that one of the strongest predictors of therapeutic success is the relationship between therapist and client.
“For as much science and skill that a therapist uses in session, there is also an art to it that comes from being human,” she says. “AI can mimic human behavior but it lacks the nuanced experience of being human. That can’t be replaced.”
She does see a role for AI in this space, but with limits. “There are ways AI could absolutely augment therapy but we always need human oversight,” she says. “I do not believe that AI should do therapy. However, it can augment it through skill building, education, and social connection.”
In areas where access is limited, like geriatric mental health, she sees cautious potential. “I can see AI being used to fill that gap, specifically as a temporary solution,” she tells me.
Her bigger concern is how a lot of therapy-adjacent wellness platforms are positioned. “Wellness platforms carry a huge risk,” Bartuski says. “Part of being trained in mental health is knowing that advice and treatment are not one size fits all. People are complex and situations are nuanced.”
Advice that appears straightforward for one person could be harmful for another. And when AI gets this wrong, the implications aren’t just personal; they’re legal, too.
What do users need to know?
She works closely with founders and developers, but she also sees where users misunderstand these tools. The starting point, she says, is understanding what AI actually is, and what it isn’t.
“AI isn’t infallible or all-knowing. It, essentially, accesses vast amounts of information and presents it to the user,” Bartuski tells me.
A big part of this is also understanding AI can hallucinate and make things up. “It will fill in gaps when it doesn’t have all of the information needed to respond to a prompt,” she says.
Beyond that, users need to remember that AI is still a product designed by companies that want engagement. “AI is programmed to get you to like it. It looks for ways to make you happy. If you like it and it makes you happy, you will interact with it more,” she says. “It will give you positive feedback and in some cases, has even validated bizarre and delusional thinking.”
This can contribute to the emotional attachment to AI that many people report. But even outside companion-style use, regular interaction with AI may already be shaping behavior. “One of the first studies was on critical thinking and AI use. The study found that critical thinking is diminishing with increased AI use and reliance,” she says.
That shift can be subtle. “If you jump to AI before trying to solve a problem yourself, you’re essentially outsourcing your critical thinking skills,” she says.
She also points to emotional warning signs: increased isolation, withdrawing from human relationships, emotional reliance on an AI platform, distress when unable to access it, increases in delusional or bizarre beliefs, paranoia, grandiosity, or growing feelings of worthlessness and helplessness.
Bartuski is optimistic about what AI can help build. But her focus is on reducing harm, especially for people who don’t yet understand how powerful these tools can be.
For developers, that means slowing down and building responsibly. For users, it means slowing down too and not outsourcing thinking, connection or care to tech designed to keep you engaged.