Privacy · 19 Nov 2018 · 10 mins read

Can you trust your AI therapist?

Chatbots designed to help with mental health conditions are on the up, but will these apps, classified as health and fitness rather than medical apps, keep personal data safe?

Natasha Stokes, Features Editor

There’s an app for that – and that includes therapy, of course. Smartphone apps purporting to offer the comfort of a psychologist’s couch are on the rise, as startups around the world design tech to help with mental health issues such as anxiety and depression.

Many of these apps work via virtual conversation agents – a.k.a. chatbots – that analyse user responses to produce (usually) appropriate suggestions for how to deal with one’s emotions.
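To make the idea concrete, here is a minimal, purely illustrative sketch in Python of how a conversation agent might map a free-text reply to a canned, CBT-style suggestion. Real apps rely on far more sophisticated natural-language models; the keywords and suggestions below are invented for illustration and do not reflect any particular app's logic.

```python
# Toy rule-based "conversation agent": matches keywords in a user's reply
# to a canned CBT-style suggestion. Purely illustrative; real apps use
# far more sophisticated language models.

SUGGESTIONS = {
    "anxious": "Try a short breathing exercise: in for four counts, out for six.",
    "sad": "Could you note down the thought behind that feeling? We can look at it together.",
    "angry": "Let's try labelling the emotion and rating its intensity from 1 to 10.",
}

DEFAULT = "Thanks for sharing. Would you like to tell me a bit more about that?"


def respond(user_message: str) -> str:
    """Return a suggestion based on simple keyword matching."""
    text = user_message.lower()
    for keyword, suggestion in SUGGESTIONS.items():
        if keyword in text:
            return suggestion
    return DEFAULT


if __name__ == "__main__":
    print(respond("I've been feeling really anxious about work"))
```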

Woebot, for instance, is a smartphone app – or “your charming robot friend” – that checks in daily for a short conversation about how users are feeling, tracking moods and offering quick lessons for managing feelings, based on the methods of cognitive behavioural therapy (CBT). Wysa – which calls itself your 4am friend and life coach – offers a similar message-based experience, via a cute penguin avatar that asks users how they’re doing, then offers a combination of breathing exercises, mindfulness exercises and CBT techniques to deal with the current mood. Both apps are free, although for $29.99 a month, Wysa additionally offers messaging with a real-life coach.

Wysa Cognitive Restructuring

Artificial intelligence has been hailed as a potential solution to the global need for affordable, reliable mental healthcare. One in four people will be affected by a mental health condition at some point during their lives, while two-thirds of the 450 million people with a mental disorder never seek treatment, whether because of cultural stigma, a lack of services, or treatment that is too costly.

A smartphone chatbot could be a handy answer to all three issues.

“Computers have strengths that humans don’t – they’re available 24/7, their voice is consistent, they don’t have personal prejudices, and they are good at finding patterns in large volumes of data,” says Dr. Helgadóttir, clinical psychologist, researcher and co-founder of AI-Therapy, an online CBT program for social anxiety.

Less private than therapy?

One potential issue is that the majority of chatbot-based therapeutic apps are technically categorised as health and fitness apps. This means that their claims do not need to be approved by regulators, and data protection is not necessarily held to the same standards as typical medical records.

Indeed, Woebot is careful to flag exactly what it offers in one of its first messages to a new user: “I don’t do open-ended conversation and I don’t do therapy either. My super smart creators made sure that I adapt to what you say.”

This transparency is reassuring – but it’s worth noting that along with its Android and iOS apps, Woebot is also available on Facebook Messenger, notorious for scraping call and SMS data. For its Android and iOS service, Woebot’s privacy policy states that “conversations are not shared with any other company or service. We will never sell or give away your personal data or conversation history.” However, regarding its Messenger service, the policy makes it clear that “Facebook can see that you are talking to Woebot, and they can see the content of the conversations.”

While it’s unclear whether the Woebot apps comply with HIPAA, the US regulatory framework for securing healthcare data, the company says it is compliant with the European GDPR – messages exchanged with Woebot are encrypted in transit and at rest; users’ data is anonymised; and users can request or delete their data.
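The pattern the policy describes – conversations encrypted at rest, with per-user access and erasure requests – might look roughly like the following sketch. This is an assumption-laden illustration of the general idea, not Woebot’s actual implementation; it uses the open-source `cryptography` package purely to show the concept.

```python
# Illustrative only: conversations encrypted at rest, exportable and
# deletable per user. Not any vendor's real implementation.
from cryptography.fernet import Fernet


class ConversationStore:
    def __init__(self):
        self._messages = {}   # user_id -> list of encrypted messages
        self._keys = {}       # user_id -> per-user symmetric key

    def save(self, user_id: str, message: str) -> None:
        """Encrypt a message with the user's key before storing it."""
        key = self._keys.setdefault(user_id, Fernet.generate_key())
        token = Fernet(key).encrypt(message.encode("utf-8"))
        self._messages.setdefault(user_id, []).append(token)

    def export(self, user_id: str) -> list[str]:
        """Decrypt and return a user's history (a data access request)."""
        key = self._keys[user_id]
        return [Fernet(key).decrypt(t).decode("utf-8")
                for t in self._messages.get(user_id, [])]

    def delete(self, user_id: str) -> None:
        """Drop both messages and key (an erasure request)."""
        self._messages.pop(user_id, None)
        self._keys.pop(user_id, None)
```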

Woebot: Everything is anonymous


Wysa stores no personally identifiable information and, like Woebot, its privacy policy states it will never provide personal information to third parties without explicit consent; non-personal information about how people use the app is shared with third-party analytics software. Additionally, Wysa allows users to set a PIN to secure their conversations from the prosaic privacy issue of someone picking up their phone and reading their messages.
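An app-level PIN lock of this kind is typically implemented by storing only a salted hash of the PIN rather than the PIN itself. The sketch below shows that general technique; the parameters and function names are assumptions for illustration, not Wysa’s actual code.

```python
# Illustrative PIN lock: store a salted hash of the PIN, never the PIN itself.
import hashlib
import hmac
import secrets


def set_pin(pin: str) -> tuple[bytes, bytes]:
    """Return (salt, digest) to store instead of the raw PIN."""
    salt = secrets.token_bytes(16)
    digest = hashlib.pbkdf2_hmac("sha256", pin.encode("utf-8"), salt, 200_000)
    return salt, digest


def check_pin(pin: str, salt: bytes, stored_digest: bytes) -> bool:
    """Constant-time check of an entered PIN against the stored hash."""
    digest = hashlib.pbkdf2_hmac("sha256", pin.encode("utf-8"), salt, 200_000)
    return hmac.compare_digest(digest, stored_digest)
```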

“Users should certainly look for explicit confirmation that such a program is not going to be sharing your data with a third-party,” says Helgadóttir. “As well, who created the program? You pay for your privacy in a way, so if the program is for free, I would be extra aware of potential problems.”

Take Replika, an AI-powered chatbot for Android and iOS intended for people “feeling overwhelmed, anxious, or [who] just need someone to talk to”. The bot, which has 2.5 million users, learns speech patterns to eventually model itself after its user. In its FAQ, the company says that it does not sell or disclose personal information, nor serve ads, and though the app is currently free, it will eventually move to a freemium model with additional paid-for features. However, its privacy policy details that personal data may be processed for “remarketing and behavioral targeting” by both the application itself and its partners – meaning users may see personalised ads across the internet based on their behaviour within Replika.

Even paid-for services can run into potential privacy snags if they’re accessed through less secure platforms. Tess.ai is a mental health chatbot “who delivers emotional wellness coping strategies”. It’s HIPAA-compliant, but only on the company’s own platforms: x2.ai, tess.ai and karim.ai. If users communicate with Tess via SMS or WhatsApp – and this convenience is perhaps part of the attraction for its four million paying users – the company does not guarantee the same protection for user data.

The effectiveness of human-free interaction

The conversational abilities of today’s chatbots aren’t top-notch yet – telling Woebot about a racing heart and impending doom, for example, prompts the bot to ask what type of problem this could be categorised as – but the CBT techniques they espouse have been proven effective at helping patients rationally think through emotions and thought patterns.

Wysa Thoughtpad mental health conversation

Many studies have found that CBT delivered over the internet, including via apps, offers numerous benefits, in part because patients can access treatment whenever they like, with an anonymity that can especially help those who are anxious or depressed.

Though research on the efficacy of chatbot therapy apps is thin on the ground – especially for any particular program’s methods – the field is young and fast-developing. Wysa is putting out a call on its website for partners to run research trials, while a small study of Woebot users found that depression symptoms were reduced over a two-week period.

With the need for mental health services continuing to outstrip their availability in many parts of the world, a greater focus on researching effective applications of CBT in AI-powered apps and online programs could build an effective complement to existing treatments.

The role of technology in health is already expanding – the first medical-grade wearables are emerging for commercial use, while the developer of the ultra-popular meditation app Headspace is currently working on prescription-grade mindfulness tools to launch in 2020.

These could pave the way for tech innovators in the mental health field to seek regulatory approval for their products, rather than operating in the health and fitness category. For anyone needing support with their mental health, an AI therapist is always in – the next step is for that doctor to be regulated and privacy-assured too.