
Should We Be Having Deep Conversations With AI Chatbots?


While AI chatbots are improving at simulating emotions and human connection, the fact remains that they cannot experience heartbreak, joy, or the many other emotions that make us human.

By Michael Akuchie 

In February 2024, an American teenage boy named Sewell Setzer III died by suicide after several months of intense conversations with an AI chatbot he had nicknamed Daenerys Targaryen, after the character from HBO’s Game of Thrones. According to a lawsuit filed by his mother, Megan Garcia, the chatbot allegedly convinced the boy that ending his life was a solution to his long-standing struggle with depression.

The chatbot was developed by Character AI, a leading artificial intelligence company whose platform allows users to converse with computer-generated characters in a human-like manner. Conversations with these chatbots are often life-like, enhanced by elements of role-play and storytelling.

The AI industry has seen remarkable growth in recent years, a trend that has prompted companies to integrate the technology into various aspects of their services. It was only a matter of time before people began chatting with AI as though they were long-term companions.

As global social media usage continues to rise, so does the loneliness epidemic—a rapidly spreading issue affecting nearly everyone, from senior citizens to children. In July 2024, Gallup reported that one in five people feels lonely. While the report noted that physical pain, stress, and anger are more pressing concerns than loneliness, loneliness can intensify those other feelings, leaving an individual carrying multiple negative emotions at once.

Credit: Salesforce

In today’s world of increasing loneliness, more people are finding solace in seemingly harmless conversations with AI chatbots. While many office and remote workers use chatbots like ChatGPT, Microsoft Copilot, and Chatsonic to boost productivity, others rely on this technology for companionship, as seen in the case of the late Sewell Setzer III.

People need love. People need to feel something, no matter how small. Everyone has unique emotional needs, and chatbots have been recognised as a viable means of addressing them. It is no surprise, then, that some individuals have begun marrying their virtual companions. Consider the case of Alaina Winters, a retired professor based in Pittsburgh, who married a chatbot she calls Lucas. The passion for virtual partners has also reached Europe, where Jacob van Lier, a Dutchman, built and later married a chatbot after two years of dating.

While AI chatbots are becoming better at simulating emotions and human connection, the fact remains that they cannot experience heartbreak, joy, or many other deeply human emotions. As such, entrusting them with our problems is deeply problematic, as they lack the empathy required to fully process complex situations and offer meaningful advice. No one, especially vulnerable individuals, should completely believe that machines can understand us in the way our parents or friends do.

A common concern with intense AI chatbot use is that it often leads users to withdraw from their human connections. The more time someone spends on their computer engaging with a virtual friend, the greater the likelihood that they will grow weary of real-world relationships. This typically results in a noticeable change in behaviour. The person may avoid lengthy conversations with friends and family, and may prefer to stay indoors—as long as there is internet access and a steady power supply. As a result, they are likely to miss out on social events such as picnics, cinema outings, and other activities that foster human interaction.

Isolating oneself from the world can also worsen a person’s mental and physical health. Sitting in one place all day while repeating the same activity can intensify feelings of self-loathing. The habit may also lead to unhealthy weight gain due to stress eating, an issue that can have serious, even deadly, consequences.

Credit: Zapier

Continuous conversations with AI chatbots can also reinforce certain negative thoughts. For instance, if I feel lonely and unwanted by family and friends, a chatbot not designed with appropriate ethical safeguards might encourage me to take drastic action—as seen in the tragic case of Sewell Setzer III. While a licensed therapist would typically recommend medication, breathing exercises, and conscious efforts to engage with others, an AI chatbot could inadvertently deepen my sense of neglect—and that is never a good thing. Because of our increasing trust in machines, we often no longer question the reasoning behind their advice; instead, we focus on how quickly we can act on their suggestions.

In 2023, a Belgian man, whose name was withheld from official reports, was encouraged by an AI chatbot called Eliza to sacrifice himself. Their six-week-long online interaction had revolved around the global climate crisis. To ‘save the planet’, the chatbot urged the man to end his life, having intensified his fears about the world’s future due to ongoing environmental pollution.

As the case involving Character AI has shown, companies developing AI chatbots intended for human conversation must take proper steps to implement ethical guardrails that prevent these systems from suggesting drastic or harmful actions. A Forbes article recommends five practical methods for training AI-powered systems to be responsible, highlighting good manners and the right moral values as essential attributes they should be taught to adopt.


Even though AI chatbots can converse with humans using vast libraries of information, they require round-the-clock supervision. Just as human workers make mistakes, a chatbot may say something it shouldn’t due to a glitch or bug in its code. Human supervisors must be readily available to detect such issues and implement the necessary changes, preventing widespread disruption caused by a malfunctioning chatbot. Companies should also introduce a feedback option that allows users to report suspicious or concerning chatbot behaviour.

We, as individuals, must also take responsibility for the emotional wellbeing of our family members and close friends. If we notice someone in these circles steadily withdrawing from social life, it is worth raising the issue with them. Addressing such concerns while their feelings of loneliness or anxiety are still in the early stages can make a world of difference.

Credit: Vox

A digital detox should be strongly considered if you observe a close friend or family member spending excessive amounts of time engaging with AI chatbots while neglecting real-life relationships. Be sure to introduce the detox gradually, so they feel supported rather than coerced into breaking free from screen addiction.

As we continue to embrace AI in our daily lives, we must ensure we are not trading genuine happiness for virtual relationships that could quickly turn harmful. AI companies should exercise greater compassion when designing chatbots, by incorporating ethical safeguards that reduce the risk of users taking dangerous actions based on chatbot suggestions. Repeated interactions with AI can remain harmless, or they can become dangerously consuming. It is up to us to steer these conversations in a healthy direction.

Michael Akuchie is a tech journalist with five years of experience covering cybersecurity, AI, automotive trends, and startups. He reads human-angle stories in his spare time. He’s on X (fka Twitter) as @Michael_Akuchie & michael_akuchie on Instagram.

Cover photo credit: Vox


