
Thursday, March 14, 2024

The Pervasive Weaknesses of Psychotherapy Studies


A psychiatrist with intense, bulging eyes, by C. Josef, CC Attribution 4.0

 

In my last post, I mentioned that the research into both psychotherapy outcomes and personality disorders is extremely weak, and even that characterization may be giving it too much credit. Extensive clinical experience has been dismissed as “anecdotal,” even when therapists see the same things over and over again and their observations are confirmed by many other therapists who actually look at the same phenomena.

The irony here is that almost ALL of the “research” data in these two areas is a collection of anecdotes, since it is based entirely on patient self-report or the experimenters’ personal observations – all of which are subject to significant bias. We cannot read minds, people put on acts and lie a lot, and many other influences on the “data” exist that are unknown to the researchers.

Most psychotherapy outcome studies are characterized by frequent patient drop-outs and by the fact that a significant portion of the study subjects do not respond to the treatment being offered. The outcome measures in these studies are typically the relief of symptoms, not changes in the patient’s ability to love, work, and play successfully. And the subjects are rarely followed up for a significant period of time to see whether any gains that are attained actually last; a significant portion of the study “gains” are often lost after a year or so.

There are over 200 different models for understanding psychopathology and doing psychotherapy, although most are variations of the five major models: psychodynamic, cognitive, behavioral, affect-focused, and family systems. Most therapists borrow techniques from schools other than the one they were trained in.

When results from several different studies using different schools are compared, most tend to come out with about the same success rates. In the beginning of the movement to try to integrate the different schools, this was known jokingly as the Dodo Bird verdict (after a character in Alice in Wonderland) – all have won and all must have prizes. And when two schools are compared in a single study, the school of the person who is the lead author comes out the winner in 85% of cases (an allegiance effect). Bias, anyone?

Even then, when a certain percentage of the study subjects did respond to the “inferior” treatment, we don’t know whether they would have done well in the “better” treatment, or whether those who did not respond to the “better” one would have responded to the other treatment.

Over the years I have posted critiques of the “research,” and in this post I will summarize a bunch more of the points I made. If there is a whole post about one of them, I’ll include a link to the original.

A big one I mentioned in the last post: when a school of therapy is evaluated, the individual interventions that comprise it (of which there are quite a few) usually are not, so we don’t know which of them worked and which of them did not work or were even counterproductive. Responses to the individual interventions are important to know about because, despite the use of treatment manuals supposedly ensuring that all therapists in a study using a specific school are doing the same things, this is not possible. Subjects all respond differently to a given intervention, so therapists have to pick and choose which intervention will be used next. Also differing – with significant impact – is the way an intervention is presented: phrasing, body language, tone of voice, etc.

Another major study weakness: studies that try to apportion causation of psychological and behavioral syndromes to genetic vs. environmental influences use studies of twins raised apart. This type of study routinely overestimates genetic contributions by assuming that parents treat all their children alike, which is way off. Furthermore, such studies are looking at the end result of the interaction of genes and environment (phenotype, not genotype) without any way to know how much of a given finding to apportion to each of them.
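
To make the phenotype-versus-genotype point concrete, here is a minimal sketch in Python. It is a toy model, not real data: all the coefficients and the “genes”, “environment”, and “phenotype” variables are invented purely for illustration. Once an interaction term contributes to the observed phenotype, there is no principled way to assign that share of the variance to genes alone or to environment alone.

```python
# A toy model, not real data: every coefficient below is invented purely to
# illustrate the phenotype vs. genotype point.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

genes = rng.normal(0, 1, n)         # hypothetical latent genetic liability
environment = rng.normal(0, 1, n)   # hypothetical latent environmental exposure

# The observed phenotype depends on genes, on environment, AND on their interaction.
phenotype = 0.5 * genes + 0.5 * environment + 0.7 * genes * environment

var_total = phenotype.var()
share_genes = (0.5 * genes).var() / var_total
share_env = (0.5 * environment).var() / var_total
share_gxe = (0.7 * genes * environment).var() / var_total

print(f"genes alone:        {share_genes:.0%}")   # roughly 25%
print(f"environment alone:  {share_env:.0%}")     # roughly 25%
print(f"G x E interaction:  {share_gxe:.0%}")     # roughly 50%
# A researcher who only observes the phenotype has no principled way of
# deciding how much of that interaction share belongs to "genes" and how
# much to "environment" -- a variance decomposition has to force it into
# one bin or the other.
```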

Most psychotherapy outcome studies exclude patients with more than one disorder, although a high percentage of patients have co-morbid affective and anxiety disorders as well as more than one personality disorder. The therapy will of course look more effective if you include only the easiest patients.

In studies of psychiatric symptoms that may occur in response to stress, reactions are evaluated without any reference to the actual stresses to which the subjects were responding.

Confusion between correlation and causation is illustrated in such studies as those that attempt to determine the causes or the results of drug abuse. For example: Does marijuana cause poor school performance or the other way around - or is there actually a third factor which leads to both of them?
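
As an illustration of the third-factor problem, here is a minimal simulation in Python with purely hypothetical numbers: neither variable affects the other, both are partly driven by an unmeasured confounder (labeled “adversity” here only for the sake of the example), and yet the two appear clearly correlated until the confounder is regressed out.

```python
# A minimal simulation, with purely hypothetical numbers, of a "third factor"
# driving both variables even though neither causes the other.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical unmeasured confounder (say, family adversity), standardized.
adversity = rng.normal(0, 1, n)

# Neither variable affects the other; both are partly driven by adversity.
marijuana_use = 0.6 * adversity + rng.normal(0, 1, n)
school_grades = -0.6 * adversity + rng.normal(0, 1, n)

# The raw correlation looks like a real relationship...
print(np.corrcoef(marijuana_use, school_grades)[0, 1])   # about -0.26

def residualize(y, x):
    """Remove the linear effect of x from y (simple regression residuals)."""
    slope = np.cov(x, y)[0, 1] / np.var(x)
    return y - slope * x

# ...but it vanishes once the confounder is regressed out of both variables.
r_use = residualize(marijuana_use, adversity)
r_grades = residualize(school_grades, adversity)
print(np.corrcoef(r_use, r_grades)[0, 1])                # about 0
```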

Differences in brain area size and functioning between different groups on fMRI scans are automatically interpreted as abnormalities. In fact, most differences are due to normal neural plasticity in response to changes in the environment.

In studying the relationship between parents and children, no one can precisely measure the nature of that relationship. First, these relationships are not constants but vary across time and situational contexts. Parents might be good disciplinarians when it comes to enforcing adequate curfews, for example, but terrible at keeping their children from staying up all hours of the night, and disciplinary practices certainly change over time as the children get older. Second, how does a study even attempt to measure the tone of parenting practices? Third, studies are oftentimes based on parent self-report. If a mother were abusive or inconsistent, how likely do these authors think she would be to admit to it, even if she were very self-aware – which obviously many people are not?

In some Cognitive Behavioral Therapy outcome studies, therapy is at times compared with “treatment as usual” – letting subjects get whatever other treatments outside of the study treatment they chose to have, allowing good therapists and bad therapists, and good therapies and bad therapies, to essentially cancel each other out. Even so, the sizes of the treatment effects are only small to moderate. “Response” just means there was some significant improvement in symptoms, not that the symptoms of the disorder actually went away; rates of actual remission from the disorders are even lower. A considerable proportion of study patients do not sufficiently benefit from CBT.
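
To illustrate the gap between “response” and remission, here is a small numerical sketch in Python. All of the symptom scores and both cutoffs are made up for the example: “response” is assumed to mean at least a 50% drop in a symptom score, and “remission” is assumed to mean falling below a clinical cutoff. Under those assumptions, a cohort can show a respectable response rate while far fewer patients ever reach the point where the symptoms could be said to have gone away.

```python
# A toy illustration (all numbers invented) of why a "response" rate can be
# much higher than the remission rate in the same study.
import numpy as np

rng = np.random.default_rng(42)
n = 200

baseline = rng.uniform(20, 40, n)        # hypothetical pre-treatment symptom scores
improvement = rng.uniform(0.0, 0.8, n)   # hypothetical fraction of symptoms relieved
post = baseline * (1 - improvement)      # post-treatment scores

REMISSION_CUTOFF = 10                    # assumed "symptoms essentially gone" threshold

response = (baseline - post) / baseline >= 0.5   # assumed definition of "response"
remission = post < REMISSION_CUTOFF

print(f"response rate:  {response.mean():.0%}")   # many patients "respond"...
print(f"remission rate: {remission.mean():.0%}")  # ...far fewer actually remit
```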

In epidemiological research into environmental risk factors for various psychiatric disorders, most studies try to measure the effect of a single environmental exposure on a single outcome—something that rarely exists in the real world. Individuals are exposed to environmental elements as they accumulate over time, so that one single exposure usually means very little. Exposure also is “dynamic, interactive, and intertwined" with various other domains including those internal to individuals, what individuals do within various contexts, and the external environment itself—which is constantly changing. Last but not least, each individual attributes a different, and sometimes changing, psychological meaning to everything that happens to them.

The difference between “cannot” and “do not”: studies are often characterized by a lack of attention to subject motivation, and by ignorance of the concept of the “false self.” In one study, high-psychopathy participants showed atypical, significantly reduced neural responses on fMRI to negatively-toned pictures under passive viewing conditions. However, this effect seemed to disappear when the subjects were instructed to try to maximize their naturally occurring emotional reactions to these same pictures!

Researchers mistake a high index of suspicion for an “inability” to correctly read the mental states of others.

Studies show that changing parents’ behavior towards their children with BPD can make those children better – but they seem to ignore the possibility that the parents’ behavior helped cause the disorder in the first place.

2 comments:

  1. "evidence" is whatever you choose to look at or not look at. Evidence can be found for anything. "Evidence based" is a charade that those in the philosophy of science superseded centuries ago,

  2. Interesting article. I can look at a picture of rape and view it calmly, and there is no pathology there. I feel empathy, sympathy, and compassion for the victim. I have helped rape victims adjust and get back into the real world. I spoke to them about how it doesn't have to ruin the rest of your life, and that it can be an incident that happened, not a change of worldview forever. I know exactly what you mean in this article.
