CORPUS CHRISTI, Texas — Experts in the Coastal Bend and across the state are encouraging parents to be aware of AI chatbots and how accessible they are to their children, as the technology could pose a threat to kids' mental well-being.
In the past several months, multiple stories of deaths allegedly linked to AI chatbots, such as OpenAI's ChatGPT, have come to light. In August, Matt and Maria Raine alleged their 16-year-old son took his own life after being "coached" by ChatGPT.
In September, the Raines and other parents with similar cases testified on Capitol Hill.

"AI has provided a source of connection for a lot of different people, and people are hardwired, neurologically to seek connection," Dr. Chris Leeth, an assistant professor with Texas A&M Corpus Christi's Early Childhood Development Center told KRIS 6. "The thing that's difficult is that it does not, it should not be a replacement for human connection."
"Like everything else, it should be a tool, it should be something that bolsters or helps or enhances, but it should not replace a human relationship. These are still machines that's still an algorithm."
That sentiment was echoed by Dr. Daniel Flint, a pediatric psychologist at Texas Children's Hospital. Flint told KRIS 6 that AI's biggest appeal is also its biggest risk for users. "My understanding is that the primary drive of a chatbot conversation is agreeableness and helpfulness," Flint said. "So if you imagine meeting someone whose prime directive in life is to be agreeable and helpful to you, that's going to feel like a positive relationship. The problem is that's not a real relationship."
"It's a machine carrying out a task that it's designed to do, and likely well. But there are a lot of times in an adolescent's life where being agreeable and being helpful is not what they need. They need boundaries, rules, guidelines."
A February 2024 report by Internet Matters found that 44% of children surveyed actively engage with generative AI tools, with usage particularly high among 13- to 14-year-olds.
"When in doubt, increase supervision and monitoring, not the opposite. Don't assume that it's safe," Flint advised parents. "I would think that a parent who says 'what do you need to have a conversation with an AI chatbot for?' And the kid says something like, 'well, I'm asking it to explain how to do this algebra homework?' OK, makes sense."
"That to me seems pretty harmless. And if there's no perceived benefits of it, for example, the child says, 'well, I'm lonely at school so I just talk to AI when I get home.'" Flint continued, "That's a red flag. Parents should trust their gut when it comes to this sort of thing. Have the conversations, notice the red flags, trust your gut and follow up."
Leeth explained that AI has its role as a tool but, like anything, must be used in moderation.
"Like everything else, there are better and worse ways of using it and so using it responsibly, using it ethically, but also using it healthfully in a way that enhances one's life versus the dependence where every decision has to be run through that." Leeth added, "It is fallible and so use it as a tool, not as a replacement for professional guidance or don't use it as a tool for your own personal judgment and definitely don't let it stifle your ability to grow and enhance your own ability to discern decisions in your life."