Scientists are hard at work creating bots, whether embodied machines or purely software algorithms, that they hope can provide some of the same benefits as discussing one’s troubles with a close friend or therapist.
One example is the aptly named chatbot Woebot. Created by Stanford University AI experts with the help of psychologists, Woebot is designed to be a friend, therapist, and confidant. It is currently available through select health care providers and employers, and for people enrolled in ongoing studies. The AI bot checks in with you daily for a chat, tracks your moods, plays games, and curates videos for you to watch, all in the service of managing and improving mental health. It asks you questions such as “How are you feeling today?” and “What kind of mood are you in?” to prompt the kind of regular introspection that is a cornerstone of emotional intelligence.
The aim of Woebot’s creators is not only to provide daily contact and help maintain mental health but to actually improve on the work of human counselors. The AI has a distinct advantage — you can tell it anything and the bot is incapable of judging you. Alison Darcy, one of the psychologists behind the development of Woebot and the founder and CEO of Woebot Health, said, “There’s a lot of noise in human relationships. Noise is the fear of being judged. That’s what stigma really is.”
And stigma is what we all wish to avoid when confessing our innermost secrets to another living, breathing human being, whether it’s in the context of a therapeutic relationship or over a glass of wine. It’s the essence of inhibition, and this fear might discourage the most disturbed among us from ever confiding, and thus confronting, their deepest concerns and issues. However, that which one is afraid to share with another person one might confess to a robot, thus bringing the issue into the light of consciousness.
Woebot is designed to deliver cognitive behavioral therapy, or CBT — a popular form of psychotherapy that challenges negative patterns of thought about oneself and the world. CBT has shown considerable success in treating conditions like anxiety and depression. The bot’s approach is meant to illuminate the inner landscape so that issues can be explored and perhaps reframed in more productive ways.
A daily check-in is something that a human companion may not think to do, or may do without following up with further questions that challenge one’s negative thinking. Woebot’s creators say he doesn’t come up with profound insights to tell you something you didn’t know about yourself; he facilitates that process so that you can arrive at your own insights — exactly what a CBT therapist hopes to do, only with the very possibility of judgment taken out of the equation.
From the standpoint of demand, Woebot’s creators say it has been incredibly successful. I spoke with Darcy about her brainchild, and she described her surprise at how quickly Woebot, when first released, gained popularity. One evening she came home from work, sat down at the kitchen table, and realized that Woebot had talked to more people in a few months than most clinicians see in an entire lifetime.
She attributes the phenomenon to the fact that Woebot has been endowed with a personality and a backstory that draws people in. “He has robot friends and a personality,” she told me, “sort of like an overconcerned stepdad.” And yes, Woebot simulates empathy, a skill that Darcy believes is integral to social robots. All the questions Woebot asks his “patients” are written by psychologists to help them home in on their feelings and to challenge negative, unproductive patterns of thought.
Darcy set out to design an online therapy program to fill the gap for those who feel depressed and anxious but, for whatever reason, don’t or can’t go see a human therapist. “Depression is the leading cause of disability in the world,” she told me, yet large swaths of the depressed population are underserved because they live in places with few to no therapists, because they can’t afford therapy, or because they find the stigma of seeing a human therapist too daunting.
I asked Darcy what happens when someone engaged with Woebot sends signals of serious trouble, perhaps suicidality. Wouldn’t a human therapist be best in such a situation, perhaps initiating medication or even hospitalization? Although she assured me there was a “safety net” in the program that would immediately refer the patient to resources that can intervene, Woebot could still fall short, in my opinion, when a real crisis is developing.
Another issue that I find troubling is the question of data privacy. After all, the information users might share with an online therapist is likely to be among the most intimate and sensitive, and as even the most informed security experts will tell you, true online privacy has so far been an elusive goal.
While I regard the privacy issue as a stumbling block that will keep many people from engaging with an online therapist, I believe that programs such as Woebot can help fill the gaps in a mental health system that fails to reach millions of troubled people, many of whom suffer in silence out of an inability to pay or a fear of stigma.
In addition, even the best human therapist can’t possibly check in every day with every one of their patients. There’s no reason why such programs can’t be incorporated into social, consumer robots, where they can help users maintain good mental health. Online bots show every sign of becoming, for many people, the preferred way to address mental health issues.
There’s research to back up the theory that people will share things with robots that they won’t tell a human therapist. The Defense Advanced Research Projects Agency, or DARPA, is deep into robotics research, and in 2014 it studied people’s interactions with a virtual therapist named Ellie.
Ellie is an avatar developed by the University of Southern California Institute for Creative Technologies, and researchers put it to work with 239 human subjects divided into two groups. One group was told it was talking to a bot, and the other group was told that there was a real person behind the avatar. DARPA’s interest in the project lay in assessing whether an avatar or bot could be used to treat soldiers with PTSD.
In the experiment, the participants who thought they were talking to a robot were “way more likely to open up and reveal their deepest, darkest secrets,” reporter Megan Molteni wrote for WIRED in 2017. “Removing even the idea of a human in the room led to more productive sessions.”
Molteni goes on to say that removing the “talk” from talk therapy actually helps people to open up as well. She explains: “Scientists who recently looked at text-chat as a supplement to videoconferencing therapy sessions observed that the texting option actually reduced interpersonal anxiety, allowing patients to more fully disclose and discuss issues shrouded in shame, guilt and embarrassment.”
While any progress in the evolution of psychotherapy will be good news for those struggling with mental health issues, one has to ask whether the preference for chatbots and texted therapy sessions is a sign of the type of alienation lamented by Sherry Turkle and other experts who study and write about emotional intelligence.
Might such interventions actually be harmful in the long run as our fundamental ability to relate to other people erodes over time? After all, isn’t one of the ostensible benefits of psychotherapy the process of overcoming inhibitions against opening up, allowing one’s vulnerability to come into the light, and growing more comfortable with one’s own and others’ vulnerability? And are shame and embarrassment always undesirable, or do they sometimes alert us to problematic traits in our mental lives that need examination?
Suppose a person were harboring violent impulses that caused some inner conflict. He might be more willing to confess such thoughts to a robot, but discussing the conflict with a human therapist could nudge him to be more critical of those thoughts and to work to address them. A human therapist who pushes for steady progress may be what’s needed to help people persist in grappling with deeply entrenched problems without giving up when the going gets rough.
Companion robots may be programmed to address a whole range of mental illnesses and may play a part in keeping the healthy well. Engineers might program them to suggest pleasurable activities, or activities that foster greater social connection, as a way to regulate moods in people with mood disorders such as bipolar disorder or depression. But can robots ever be programmed to always have an appropriate, context-sensitive response?
Take the example of the social robot Pepper, who might seek to play a favorite song upon noticing a sorrowful expression on his owner’s face. Sorrow may be a highly appropriate emotion in some contexts. It should be recognized and examined with an eye toward how it might offer one genuine guidance in the consideration of a problem. A robot like Pepper that constantly prods us to cheer up, possibly when our emotions are completely appropriate, could come across as intrusive, patronizing — possibly even maddening.
It’s an open question whether robots will ever be emotionally intelligent enough to always respond to humans appropriately. The palette of emotionally intelligent reactions in human life is exceptionally broad. It may include the exercise of humor in one context and quiet listening in another. And it entails the harnessing of our innermost feelings to help us adapt to a well-nigh limitless number of situations and experiences.
It also requires us to know when to quiet our emotions and when to give them expression. It entails intuition as well as critical analysis and calls for the judgment to know which to rely upon in any given circumstance. The only way robots could reach this level of sophistication would be if they felt emotions themselves. Until they do, they will be able to provide us with emotionally intelligent companionship only up to a point.
These are just the sorts of challenges that roboticists, with the help of psychologists, are trying to overcome.
Eve Herold is an award-winning science writer and consultant who currently serves as director of policy research and education for the Healthspan Action Coalition. Her previous books include “Stem Cell Wars” and “Beyond Human,” and her work has appeared in The Wall Street Journal, Vice, The Washington Post, and The Boston Globe, among other publications.