In 2013, Rutherford and Roose [American Journal of Psychiatry, 170(7), 723-733] published a paper discussing the results of a previous study, which had found that the placebo (inactive "sugar" pill) response rates in randomized clinical trials (RCTs) of antidepressant medication had risen at a rate of 7% per decade over the past 30 years. Consequently, the average difference between active medication and placebo observed in published antidepressant trials decreased from an average of 6 points on the Hamilton Rating Scale for Depression (HAM-D) in 1982 to only 3 points in 2008.
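To see how those two findings hang together numerically, here is a back-of-the-envelope sketch in Python. It simply re-uses the figures quoted above; reading "7% per decade" as percentage points and accumulating it linearly over the 1982-2008 span are my own simplifying assumptions:

```python
# Back-of-the-envelope arithmetic using the figures quoted above.
# Treating "7% per decade" as a simple, non-compounded rise is an assumption.
placebo_rise_per_decade = 7.0   # percentage points per decade, per the study cited
years = 2008 - 1982             # span over which the HAM-D separation shrank

rise = placebo_rise_per_decade * years / 10
print(f"implied rise in placebo response: ~{rise:.0f} percentage points over {years} years")

# Drug-placebo separation on the HAM-D, per the figures above
sep_1982, sep_2008 = 6.0, 3.0
print(f"drug-placebo separation: {sep_1982} -> {sep_2008} HAM-D points (halved)")
```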
Now the lead author of that paper and his colleagues have found something similar going on in RCTs of antipsychotic medications for schizophrenia conducted between 1960 and 2013 (the findings were published online in October in JAMA Psychiatry). Most interestingly, in the 1960s, patients who received the placebo in such studies actually got worse on it. By the 2000s, however, they were getting better on placebo.
Even more striking, the average RCT participant receiving an effective dose of medication in the 1960s improved by 13.8 points on the Brief Psychiatric Rating Scale (BPRS), whereas this improvement had diminished to 9.7 BPRS points by the 2000s.
What the heck is going on here?
Are the medicines somehow becoming less effective than they used to be?
In their article from 2013, Rutherford et al. try to explain this by looking at such things as expectancy (what subjects think is going to happen with their symptoms), a statistical phenomenon known as regression to the mean (see this post for a definition), the amount of contact subjects have with the study doctors, the social desirability of certain responses, and the “Hawthorne Effect” (subjects in an experiment improve or modify the aspect of their behavior under study simply by virtue of knowing that the behavior is being measured).
While the expectations of the average John Q. Citizen that antidepressants will work may have increased somewhat over the decades because of such things as celebrities describing their experiences with depression or commercials for Cymbalta and Abilify, there has also been a lot of negative information on that same score on television shows like 60 Minutes and from the anti-psychiatry rants of Scientologists and others.
Whether these two influences completely cancel each other out is debatable, but I think it is safe to say that many of the possible reasons for a change in placebo response rate advanced by the authors have in fact not changed significantly since the 1960s. In fact, if people in the 1960s didn't think antidepressants would work, expectancies would have been lower, not only in the placebo group, but in the active treatment group as well.
If certain factors that affect placebo response rates have not changed much, then those factors cannot explain the rise in placebo response rates.
The authors also mention a couple of factors that I believe to be more on the mark: first, that assessments of eligibility for a clinical trial may be biased toward inflated symptom reporting at the beginning of the study - when investigators have a financial incentive to recruit patients - and second, that most research participants in the 1960s and 1970s were recruited from inpatient psychiatric units, whereas current participants are symptomatic volunteers responding to advertisements.
The biggest change in research with RCTs over the period in question is that many studies are no longer done in medical schools, but by private entities called Contract Research Organizations (CROs). The doctors who run the studies are paid for each subject they recruit, and subjects only get paid if they are recruited. This means that there is not just one set of financial incentives for everyone to exaggerate their symptoms at the beginning of a study, but two! This tendency will lead to a higher placebo response rate because, after they are recruited, subjects no longer have an incentive to exaggerate their symptoms. So they seem to get better.
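To make that mechanism concrete, here is a minimal simulation sketch of the baseline-inflation effect. All the numbers - the HAM-D-like severity scores, the size of the exaggeration at screening, the week-8 follow-up - are made-up illustrative values, not figures from any study:

```python
# Minimal simulation of baseline symptom inflation producing an apparent
# "placebo response" with no real change. All numbers are illustrative.
import random

random.seed(0)
true_severity = [random.gauss(14, 3) for _ in range(200)]   # unchanging real severity

# At screening, both parties have an incentive to exaggerate, so the
# reported baseline is the true severity plus an inflation term.
baseline = [s + random.uniform(2, 6) for s in true_severity]

# After enrollment the incentive disappears; week-8 placebo scores just
# drift around the unchanged true severity.
week8 = [s + random.gauss(0, 2) for s in true_severity]

mean = lambda xs: sum(xs) / len(xs)
print(f"mean reported baseline: {mean(baseline):.1f}")
print(f"mean score at week 8:   {mean(week8):.1f}")
print(f"apparent 'placebo improvement': {mean(baseline) - mean(week8):.1f} points")
```

In this toy example the placebo group appears to improve by roughly the size of the average exaggeration, even though nobody actually got better.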
It is very easy to bias a research diagnostic interview.
I'll get to that in a minute, but first a digression.
I was fortunate to train at a time when patients could be kept in the hospital for several months if necessary, so we got to see patients in depth over a considerable time period, and could watch their medication responses. People who have trained more recently do not see this anymore. Antidepressant responses clearly took a minimum of two weeks - and then only if the patient responded to the very first drug given, at the first dose given.
Because most patients do not understand this, the doctor can usually discriminate a placebo response from a true drug response by observing when the patient starts to get better, combined with the rate at which they improve. Since subjects don't know what to expect, improvement on this timeline could not be due to the expectancy factor, which in turn is necessary for having a good placebo response.
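As a rough illustration, that timing heuristic could be written down something like this. The two-week threshold comes from the observation above; treating abrupt improvement as a placebo marker is my own simplification of "the rate at which they improve", and neither is a validated clinical cutoff:

```python
# Toy encoding of the timing heuristic described above. Thresholds are
# illustrative, not validated clinical cutoffs.
def likely_response_type(onset_day: int, gradual: bool) -> str:
    """Classify an apparent antidepressant response by its onset and course."""
    if onset_day < 14 or not gradual:
        # True drug effects took at least two weeks and built up gradually;
        # earlier or abrupt improvement points toward a placebo response.
        return "more consistent with a placebo response"
    return "more consistent with a true drug response"

print(likely_response_type(onset_day=5, gradual=False))   # early, sudden improvement
print(likely_response_type(onset_day=18, gradual=True))   # slow onset after two weeks
```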
I can tell you that a severe, properly diagnosed melancholic depression almost never showed a significant placebo response. The placebo response rate was probably about the same as the placebo response rate to a general anesthetic.
Another thing we observed was that patients with an acute schizophrenic reaction did not seem to get any better at all with such things as additional contact with doctors, which might be expected if a placebo response were taking place. In fact, you were more likely to hear evidence that a patient had a significant thought disorder in a longer conversation than in a brief, casual one.
A thought disorder is at least as important as delusions and hallucinations in showing that someone does, in fact, have schizophrenia. People with a thought disorder see relationships between things that are completely illogical (loose associations). For example, the first patient I ever saw with schizophrenia in medical school believed that everyone who wore oxblood-colored shoes was a descendant of George Washington.
Huh?
Anyway, back to the question of biasing diagnostic exams. This is particularly easy to do when diagnosing a clinical depression. It is important to distinguish people with a clinical depression from those who are merely chronically unhappy. People with a clinical major depression, especially with so-called melancholic features, are a very different breed of cat.
The symptoms of the two conditions do overlap a bit, so there are some cases in which it is really hard to tell one from the other. However, in the majority of cases it is a fairly easy call - provided you do a complete psychiatric assessment over several days to see if a symptom of depression meets the requirement known as the Three P's:
The symptoms need to be pervasive (they do not go away depending on what the patient is doing at a particular time), persistent (lasting almost all day, every day, for at least two weeks), and pathological (the patient's symptoms and functioning differ to a highly significant degree from the patient's usual state). In addition to the Three P's, all of the patient's symptoms have to occur simultaneously.
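For illustration only, the Three P's could be written down as a simple check like this. The field names and the two-week threshold just mirror the definitions above; this is a sketch of the logic, not a diagnostic instrument:

```python
# A sketch of the Three P's as a boolean check. Field names are my own;
# this mirrors the definitions above and is not a diagnostic instrument.
from dataclasses import dataclass

@dataclass
class Symptom:
    pervasive: bool        # present regardless of what the patient is doing
    days_persistent: int   # nearly all day, every day, for how many days
    pathological: bool     # a marked departure from the patient's usual state

def meets_three_ps(symptom: Symptom) -> bool:
    return (symptom.pervasive
            and symptom.days_persistent >= 14   # at least two weeks
            and symptom.pathological)

# Example: a mood that lifts whenever the patient is distracted fails the
# "pervasive" test, no matter how long it has been reported.
print(meets_three_ps(Symptom(pervasive=False, days_persistent=30, pathological=True)))
```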
These types of characteristics do not usually show up on the type of symptom checklists used to assess patients in clinical trials, because the checklists are mostly based on a patient's self-report. Unfortunately, the majority of people do not know the difference between a clinically significant symptom and one that is not. The rate of false positive responses on checklists is staggering.
Many studies instead use something called a semi-structured diagnostic interview (such as the SCID) to make a diagnosis. It is called semi-structured because it tells the examiner to ask certain questions verbatim, exactly as they are written. However, the examiner is then free to ask any follow-up questions needed to clarify the clinical significance of any symptom the patient reports.
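As a rough picture of what "semi-structured" means in practice, one item might look something like the sketch below. The question wording is paraphrased for illustration and is not quoted from the actual SCID:

```python
# Illustrative only: one "semi-structured" item, with a fixed verbatim stem
# and follow-ups left to the examiner's judgment. Wording is paraphrased,
# not quoted from the actual SCID.
scid_style_item = {
    "stem": ("In the past month, has there been a period when you felt "
             "depressed or down most of the day, nearly every day?"),
    "ask_verbatim": True,    # the stem must be read exactly as written
    "followups": "examiner's discretion: probe how pervasive, persistent, "
                 "and pathological the reported symptom really is",
}
print(scid_style_item["stem"])
```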