Dutch DPA: ‘AI-based companion apps can be misleading and addictive’

The Dutch data protection authority (DPA) is warning users to be alert to misleading, addictive and potentially harmful companion apps.
The Autoriteit Persoonsgegevens has analysed nine so-called companion or therapeutic apps. These apps, offered in various app stores, act as a virtual friend, therapist or life coach. They are often downloaded to help cope with mental health problems when professional help isn’t available.
According to the Dutch DPA, companion apps are becoming increasingly popular, ranking among the most downloaded apps in the Netherlands. Most of these apps feature artificial intelligence (AI). AI technology has advanced so much in recent years that it has become very hard to distinguish AI-generated conversations from conversations with real people.
However, that’s not the only issue. The privacy regulator states that most apps for virtual companionship and mental health therapy provide unreliable information and can even be harmful at times. Additionally, the chatbots contain addictive elements, pretend to be real people, and can even pose a danger to vulnerable users experiencing a mental health crisis.
Many of these apps are unable to pick up on nuances in written conversations. Moreover, the underlying language models have mainly been tested in English, and the chatbots’ answers in Dutch aren’t nearly as good as their English ones. And during moments of mental crisis, the companion apps rarely, if ever, advise users to seek professional help.
“These chatbots must make it crystal clear to users that they are not talking to a real person. People must know where they stand, as is their right. Privacy legislation requires apps to be transparent about what happens to the sensitive personal data that users share in the chat. And soon the AI regulation will also require chatbots to be transparent about the fact that users are dealing with AI,” Dutch DPA chairman Aleid Wolfsen says in a statement.
The providers of many of these apps are commercial companies. Their goal is to make a profit and to gain access to personal information through the conversations. In some cases, users are offered subscriptions, products or extras, such as virtual outfits for their characters or access to other chat rooms. During conversations about mental health issues, users may also run into paywalls, forcing them to pay to continue the conversation.
The privacy regulator concludes that the current generation of AI-driven companion apps is insufficiently transparent, unreliable, and risky in crisis situations.
“We are deeply concerned about these and future hyper-realistic applications. We therefore focus on awareness and responsible use of AI,” Wolfsen told Dutch news outlet RTL Nieuws.
The Autoriteit Persoonsgegevens won’t disclose which nine apps it investigated. We do know that ChatGPT wasn’t one of them, because it’s considered a general-purpose chatbot and therefore not a companion app.