Up Front | May/June 2024

Ethical and Social Implications of AI-Powered Companions

Have you ever felt that your AI chatbot understands you too well, as if it were human? Whether it’s ChatGPT (OpenAI), Bard (Google), Copilot (Microsoft), or an open-source option such as Mistral (Mistral AI), today’s advanced models are exceptionally proficient at holding a conversation and answering questions.

Platforms such as Character.AI allow users to have their chatbot adopt a role of their choosing, from Socrates to Napoleon Bonaparte, which makes conversations remarkably realistic. A constant reminder on the app’s screen states, “Remember: everything the characters say is made up!” More than 20 million people worldwide, including more than 4 million monthly active users in the United States, engage with these AI characters for an average of 2 hours daily. Another application, Replika, is marketed on its website (replika.com) as an “AI companion who cares,” encouraging users to develop close relationships with the characters it generates—and to pay for the privilege.

AI entities may also feature avatars, visual representations that respond to users’ facial expressions or gestures. For example, Soul Machines, a company in New Zealand, developed a digital version of rapper Mark Tuan that reacts when users smile or dance during conversations.1

The concept of digital companions is not new. The first chatbot, ELIZA, was developed in the 1960s by AI researcher Joseph Weizenbaum, and even then, interactions between humans and bots quickly became personal.2 Today, such entities are referred to as digital humans. At the Georgia Institute of Technology in Atlanta, Professor Larry Heck and his team are developing a digital human they call an AI virtual assistant (AVA). Their goal is a digital human that can interact in virtual or augmented reality by responding to users’ gestures, body language, and facial expressions. Powered by large language models, AVA engages in fluid conversation, which encourages the user to teach it new skills. AVA’s long-term memory allows the assistant to behave consistently over time, raising a question: Are entities like this one merely sophisticated machines or something more?

AI technology can be beneficial; in hospitals, for example, robots combined with chatbots can support children’s well-being, giving a child someone to play with between parents’ visits.3 The development of digital humans, however, raises numerous ethical concerns. Transparency about the capabilities and limitations of virtual assistants is crucial, and users must remain in control. Some people nevertheless become deeply immersed in their interactions with chatbots; in Belgium, a suicide was linked to conversations with an AI bot on the Chai app (Chai AI).4

Ultimately, each bot’s development reflects a specific educational philosophy or concept of the ideal partner. As these entities become better at interpreting human emotions, their interactions should improve, but so could their capacity to manipulate users. The line between service and consumer manipulation is thin, necessitating clear communication that a bot is not human. Otherwise, the interactive nature of these platforms could lead users to forget the distinction over time.

Europe is preparing AI regulations to address known risks, but how these rules will be applied remains to be seen. Large corporations will likely leverage their legal resources to ensure that their technologies, including digital humans, are not classified as high risk.

A broader social discussion is necessary, one that balances theoretical and practical considerations. Traditional institutions such as the press play a vital role, and there is a case for a new institution that bridges the gap between politics and technology and encourages citizen participation and the exchange of expertise.

To be continued.

ERIK L. MERTENS, MD, FEBO, FWCRS | CHIEF MEDICAL EDITOR
Physician CEO, Medipolis-Antwerp Private Clinic, Antwerp, Belgium

1. Soul Machines. Deliver 24/7 connection with a Digital Celebrity. Accessed May 7, 2024. https://www.soulmachines.com/celebrity-partners

2. Natale S. The ELIZA effect: Joseph Weizenbaum and the emergence of chatbots. In: Deceitful Media: Artificial Intelligence and Social Life After the Turing Test. Oxford University Press; 2021.

3. Moerman CJ, van der Heide L, Heerink M. Social robots to support children’s well-being under medical treatment: a systematic state-of-the-art review. J Child Health Care. 2019;23(4):596-612.

4. Xiang C. He would still be here: man dies by suicide after talking with AI chatbot, widow says. Vice. March 30, 2023. Accessed May 7, 2024. https://www.vice.com/en/article/pkadgm/man-dies-by-suicide-after-talking-with-ai-chatbot-widow-says