Once rockets go up
Who cares where they come down?
That's not my department
Says Wernher von Braun
~Tom Lehrer
This con job often involves multiple components that seem to be acting independently but are coordinated behind the scenes, and it employs very subtle mind tricks to shape the thinking of the physician. The same influence techniques are taught to the drug companies' army of pharmaceutical representatives, who visit doctors in their offices and who also learn how to ingratiate themselves with the physician.
This post will focus on one of their techniques, one that I refer to as plausible deniability. Plausible deniability is defined as the believable denial of a fact or allegation, or of previous knowledge of a fact; the term most often refers to the denial of blame for wrongdoing. If illegal or otherwise disreputable and unpopular activities become public, high-ranking officials and academic physicians alike may deny any awareness of such acts or any connection to the agents used to carry them out.
The term became notorious during the arms-for-hostages Iran-Contra scandal in 1986. I use it here, however, to describe a strategy in which psychiatric "experts" who are paid directly by pharmaceutical companies advocate for non-FDA-approved indications for brand-named drugs (which is supposed to be illegal) in such a way that they can deny that they are doing exactly that.
In my book, How Dysfunctional Families Spur Mental Disorders, I strongly critique the presentation of one particular study that purported to show that antidepressant medications, which are almost all going or have gone generic, are not effective in manic-depressive patients who are in the midst of a depressive episode. The study seemed to me to have mainly used subjects who had already failed at least one and perhaps two or three other antidepressant medications, making it far less likely that they would respond to the drugs used in the study.
Nowhere in the journal article describing the study does it say that it is a study of treatment-resistant bipolar depression, as opposed to a study of garden-variety bipolar depression. I figured it out by reading between the lines. Even though the study itself was not rigged, any experienced clinician could have easily predicted that the chances were excellent that it would turn out exactly the way it did. The journal article was extremely misleading because it did not disclose the true nature of the subject sample.
If doctors come to believe, as the drug companies want them to, that antidepressants do not work in bipolar depression, they will instead prescribe brand-named antipsychotic medications, which in my clinical experience have very limited effectiveness in any clinical depression.
Of course, the authors of the study clearly did not recommend that the expensive brand-named atypical antipsychotics be used instead of antidepressants. They did not need to. The drug companies have other people who do that job for them. In fact, it is better for PhRMA if the researchers do not recommend the other drugs themselves, in order to maintain plausible deniability in case someone like me notices that the study is not what it says it is.
I got into a discussion with a nationally known psychiatrist, Dr. William Glazer, on LinkedIn about whether or not studies can be rigged, and he wanted me to provide an example of one that I thought had been. I brought up the article in question. It turns out that Dr. Glazer knew the lead author of the study, Dr. Gary Sachs, as they both had worked at Harvard, and he asked him about what I wrote.
The language that the two of them used provides an extremely good illustration of plausible deniability as practiced by PhRMA-influenced experts.
This is a long post and I apologize for that, but I want readers to appreciate the sophistication of how this is done.
I will include some of the exchanges we had, along with additional commentary describing what I suspect might be going on. These comments were not part of the original exchange, and are in brackets and italics. Some lay explanations for technical terms are also included in that format.
The original topic of the conversation was a recent article in the New York Times by a Pharma critic, Dr. Marcia Angell. She had previously been a hero of mine, but both Dr. Glazer and I agreed that she went far beyond her expertise in this particular article. Here is the conversation:
William Glazer MD • ...No matter how hard you try, it is impossible to "rig" a study to show that an ineffective drug is effective. Pharma studies might show statistical differences that don't have much clinical meaning (because they include large numbers of subjects), but they can't "rig" a study to show something that isn't there. The FDA requires that an antidepressant demonstrate statistical superiority to placebo in 2 separate studies. Over 30 antidepressants have run through that requirement and most of them are available today to help us treat patients.
Marcia Angell and the authors of the books she reviewed are capitalizing on the media and political attention that has come out of lawsuits (most of them settled) against pharmaceutical companies. With only one exception, these authors have not treated patients (as is mentioned in previous comments on this blog). These authors are utilizing media style, not scientific style, to make their points. And they are doing some damage if ONE patient stops antidepressants after reading this misinformation and gets hospitalized, loses a job or commits suicide. And from what I hear, this is happening. If anyone has a patient who was influenced by this misinformation, I would appreciate hearing about it at [his e-mail address].
David Allen • I pretty much agree with all the points made in these comments, and I am extremely disappointed in Angell, because she has in the past discussed what is going on between Pharma and academia and raised many valid points. When it comes to psychiatry, though, she knows nothing. Antidepressants are among the most effective drugs in all of medicine.
I do have to take exception on one point made by Bill Glazer, though: the question of whether Pharma can rig studies. If they can't, why is it that 90% of head to head comparisons between me-too drugs [New medications that are almost the same as older medications and do the same thing] come out in favor of the sponsor's drug? Also, if the authors of a study - say on medication for supposed bipolar disorder in children - mix in subjects with "bipolar NOS" because they do not believe in the duration criteria, and also do not take into account that they may be just sedating acting-out children, then the drugs will look "effective." But effective for exactly what?
William Glazer MD • David your question is a good one, and fortunately, we have the time and space here to air it.
To me, the term "rig" has a ring of the sinister. It implies that the drug studies have hidden elements in their designs in order to dupe reviewers like the FDA, independent clinicians and a wide readership of practitioners into thinking that the drug in question is something that it is not. If that is what you mean by "rig", then I need to ask you to provide a substantive example or two.
If by "rig" you mean that the drug companies select design elements that will bring out the advantages of the particular agent in question, then that is a very different story. Having been involved in clinical research, I have observed that ALL studies are "rigged" in this context of the word regardless of who is funding the project. You have a hypothesis, you set out to prove your hypothesis to be true. You live and die by your findings. A faculty member conducting a line of investigation completely funded by NIMH will not get his/her papers published if he/she does not have positive results.
This kind of "rigged" is how the scientific process works. As far as I am concerned, it is a legitimate process because it is open to scrutiny via peer review, FDA review (which is far more stringent than any peer review conducted by journals) and reader review. As an aside, note that no manufacturer has won a claim status for superiority of its antidepressant over another one. That's because no study has definitively shown the superiority of one antidepressant over another.
If Company A designs a study comparing their antidepressant to Company B's and they choose for example, an advantageous dose for its product, anyone is able to read the detail and see what they did. [How is under-dosing a comparator drug not rigging the study even if it’s done openly?] That is our responsibility as clinicians to do. The results can be questioned - and they usually are by the manufacturer of the competing antidepressant, and we all can follow that dialogue.
Final point for now: What CAN happen (and does happen) is that Company A proceeds to market a study comparing antidepressants by "spinning" the results. At the end of the day, it is our fault if we buy it, and the company should be held liable for being irresponsible in its communications. I'd say the same for companies that hold back evidence of side effects - that is unacceptable and irresponsible.
So, David, I would be interested to see an example of "rigged" in the first context that I described above. I think that we should be careful about making generalizations about Pharma funded studies. They often bring in new information, and ultimately, innovation. They should not be thrown out wholesale - we'd end up going back to the stone age.
David Allen • Bill, thanks for your thoughtful response. First, I want to make clear that I am not advocating against industry sponsored studies. Of course they bring in new information. Second, I absolutely agree that physicians need to read the studies to see what was done, and it is ultimately up to the doctor to make an informed decision.
Unfortunately, most docs only read the abstracts - if that - which, as you know, do not include the weaknesses of the study described in the discussion section. [Note as we proceed that Dr. Glazer does not address this point].
You are right about competing companies exposing the weaknesses of their competitors' brand-named drugs, but who is doing that as much for generics? [He does not address this point, either].
In the past, drug companies have deep-sixed studies with negative results and only presented the positive ones to the FDA and the public. Thankfully, I think this practice has been stopped. Replication of course is essential. [Ditto; this point is never addressed].
IMO, most of the studies of Lamictal may not be "rigged" but they are highly misleading. The outcome measure "time to next affective disorder episode" over [a period of] 18 months is almost meaningless. If a drug is prophylactic like lithium, then there wouldn't be many relapses at all in that period. If just 1% of the subjects actually responded to Lamictal, it would beat placebo since the time to next episode would be over 18 months. This all could mean that we have much better drugs. But I've never seen this discussed in the articles.
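[The statistical point above can be illustrated with a small simulation. To be clear, everything in this sketch is a hypothetical assumption chosen for illustration - the follow-up length, the baseline relapse rate, and the responder fraction are invented numbers, not data from any actual Lamictal trial. The point it demonstrates is only this: when the outcome measure is "time to next episode" capped at the end of follow-up, a drug that truly helps even a small fraction of patients can pull ahead of placebo on average, because each responder contributes the maximum possible observation time.]

```python
import random

# Hypothetical illustration only: all parameters below are assumptions,
# not data from any real trial.
random.seed(42)

FOLLOW_UP = 18.0           # months of observation (assumed)
MEAN_TIME_TO_EPISODE = 6.0 # assumed mean time to relapse without benefit
RESPONDER_FRACTION = 0.10  # assumed small fraction of drug arm who truly respond
N = 5000                   # subjects per arm (assumed)

def observed_time(responder: bool) -> float:
    """Time to next episode, censored at the end of follow-up."""
    if responder:
        return FOLLOW_UP  # a true responder never relapses in the study window
    # Non-responders relapse with an exponentially distributed delay,
    # cut off at the end of follow-up.
    return min(random.expovariate(1 / MEAN_TIME_TO_EPISODE), FOLLOW_UP)

placebo = [observed_time(False) for _ in range(N)]
drug = [observed_time(random.random() < RESPONDER_FRACTION) for _ in range(N)]

mean_placebo = sum(placebo) / N
mean_drug = sum(drug) / N
print(f"placebo mean time to episode: {mean_placebo:.1f} months")
print(f"drug mean time to episode:    {mean_drug:.1f} months")
```

[With a large enough sample, even a small responder fraction produces a reliable separation on this outcome measure, while most patients in the drug arm received no benefit at all - which is exactly the weakness of "time to next episode" as an endpoint.]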
I don't think it's a coincidence that we are hearing all this negative stuff about antidepressants just as they are almost all going generic. I cannot prove it, but I suspect the drug companies are demonizing them hoping that doctors will prescribe atypicals instead. And that is exactly what I am seeing in patients referred to my residents' clinics. Interesting that the manufacturers of Paxil suddenly seemed to "discover" that they had strong evidence of teratogenicity [the drug may cause some birth defects in babies born to mothers taking the medication] from decades ago.
I believe the drug companies demonized benzos after they went generic by wildly exaggerating their addictive potential. Whenever and wherever you see references to benzos in the professional literature, there is an accompanying phrase to the effect that, "but of course they are addictive." I don't see any references to atypicals that add, "but of course they can cause diabetes." And frankly, if I had to choose, I'd rather be addicted to a benzo than to insulin.
There is a widely-quoted study by Sachs et al in the NEJM claiming antidepressants don't work in bipolar patients, when any clinician who has used lithium for mania prophylaxis knows that they work wonderfully for bipolar depression as long as you have a mood stabilizer on board to prevent switching [from depression straight into a manic high]. The study does not say what percentage of the subjects had already failed other antidepressants, but the subjects were all referred by clinicians and were already on a mood stabilizer, and in the midst of an active episode of bipolar depression [meaning they were all being treated already but were not getting better, so they were referred for the study].
Only patients who had failed the two antidepressants used in the study were excluded; none of the patients who failed the multitude of other antidepressants were excluded. The article says some patients were tapered off of their other meds, but does not say what meds those were.
Furthermore, when some atypicals [antipsychotics like Abilify] became FDA approved for mania, patients on those meds were brought in as well as being on a "mood stabilizer." What all this means is that a significant percentage of the subjects in the study had failed at least one trial of an antidepressant, and perhaps two or three! And some also failed an atypical, which are being touted as antidepressants these days. In other words, they used a treatment-resistant population and never said so, and shock of shocks, placebo outperformed the antidepressants. I wrote e-mails to both of Dr. Sachs' email addresses nicely asking what percentage of the subjects in the study had failed other antidepressants. I never heard back.
That sounds like a rigged study to me.
[This is an excellent discussion of why studies of antidepressants are so notoriously difficult to do, but it is in no way a discussion of whether his journal article was misleading. Note also that he uses the words "treatment-naïve" or "treatment-refractory" instead of the current psychiatric buzzword, "treatment-resistant." This may seem like a trivial point, but many practitioners just lightly scan articles and discussions like this. Using the buzzword might get their attention more than the alternatives. It makes a difference!].