For One Mental Health Approach, Success Is Hard to Measure
Mental health officials have long had a solution for treating those who are unable or unwilling to care for themselves. It’s called assisted outpatient treatment, or AOT, and it allows states, through a court order, to require mental health services — including medication — for individuals who refuse voluntary treatment and present a harm to themselves or others.
In the three decades since its introduction, forty-five states and the District of Columbia have implemented some version of AOT, and proponents say the policy has been successful. Studies have shown, for example, that when AOT is implemented well, it reduces hospital readmissions, lowers arrest rates, and saves the mental health care system lots of money.
But critics of AOT have called much of this research into question, arguing that not all such programs are created equal. In the best cases, patients undergoing AOT receive a caseworker, constant vigilance, and access to a suite of mental health resources. In the worst cases, critics say, AOT patients are booked into the system with little in the way of follow-up or oversight.
The differing viewpoints on AOT were brought into stark relief after publication of a study earlier this year by researchers at UCLA who had sought to shed light on disparities among programs. While some states have seen success with AOT, they wondered, is the program effective overall? They found that the implementation of AOT is uneven both across and within states, and that in many cases, the approach is plagued by “inadequate resources, lack of enforcement power, inconsistent monitoring, and weakness of interagency collaboration.”
The survey, based on phone calls with mental health agencies across the United States, found that many states did not even have active AOT programs, despite having a statute governing the treatment’s use, and in many cases, legislatures hadn’t made funds available for implementation. California and Nevada were each treating fewer than 100 patients, while New York, which is frequently cited as having one of the country’s most successful AOT programs, is treating over 3,000 and spending $32 million a year doing so.
This month, the authors of the UCLA analysis came under fire from Jeffrey Geller, a professor of psychiatry at the University of Massachusetts Medical School, and Brian Stettin, policy director at the nonprofit Treatment Advocacy Center in Arlington, Virginia. In a letter published in the journal Psychiatric Services, Stettin and Geller called the UCLA study “an inaccurate and misleading set of findings, based on verbal, unverified claims of the interviewees.” A proper survey, they argued, would require much more elaborate ground-truthing in states with AOT laws, and it would include independent verification of the assessments made by survey participants.
The critique comes from experience: Stettin and Geller ran a similar survey of their own in 2014. “What we find is in many states you have spots that have successful AOT programs,” Stettin said in a phone call, “and then you have large swaths that don’t do anything.”
In a reply letter published alongside Geller and Stettin’s critique, the UCLA researchers — Marcia Meldrum, Erin Kelly, and Joel Braslow — admitted that their analysis could have been more comprehensive. But they also said the study “highlighted the lack of reliable, accurate data that are needed fully to assess the effectiveness of programs such as AOT in many states.”
In a phone call with Undark, Braslow elaborated. “Researchers end up defending a treatment based on an unstated assumption of what they see as the nature of illness and what the goal of treatment should be,” he said. “I think AOT is a response that isn’t actually addressing the fundamental structural problem.”
Whether or not that’s true, the recent exchange of viewpoints on AOT underscores just how difficult a task it is to empirically measure the implementation of mental health programs across the country. Because each program provides a different set of services, and because those services differ from place to place, it’s almost impossible to measure success in any standardized way. That was the conclusion of two Duke University researchers publishing in the same journal two years ago. “The search for a definitive and generalizable randomized trial of outpatient commitment may be a quixotic quest,” they wrote. “After three generations of studies and evaluations and systematic reviews of the evidence for outpatient commitment, there is yet little agreement about whether it works.”
It’s a familiar problem across mental health services: Researchers and policymakers disagree over how to measure a treatment’s effectiveness — in large part because mental illness is anything but standard. Every patient is different, as are treatment resources and administration protocols across jurisdictions — and for his part, Stettin remains skeptical that a comprehensive analysis of AOT implementation could ever be achieved with a simple survey.
“I would love for such a thing to exist,” he said. “[But] you’d have to go county by county through the U.S. and actually see what’s on the ground.”