Wednesday, August 31, 2011

Plausible Deniability

Once rockets go up
Who cares where they come down?
That's not my department
Says Wernher von Braun
                                                  ~Tom Lehrer
Pharmaceutical companies and their marketing departments have been studying the psychology and the behavior of physicians for decades and have become masters of the subtle con job.  The goal of the con job is to convince physicians to prescribe new and expensive drugs when old generics will do the job better, much more cheaply, and sometimes with fewer side effects.

This con job often involves multiple components that seem to be acting independently but are being co-ordinated behind the scenes, and employs very subtle mind tricks to shape the thinking of the physician.  The influence techniques are also taught to their army of pharmaceutical representatives who visit doctors in their offices, and who also learn how to ingratiate themselves with the physician.

This post will focus on one of their techniques, one that I refer to as plausible deniability.  Plausible deniability is defined as the believable denial of a fact or allegation, or of previous knowledge of a fact. The term most often refers to the denial of blame for wrongdoing. When illegal or otherwise disreputable and unpopular activities become public, high-ranking officials and academic physicians alike may deny any awareness of such acts or any connection to the agents used to carry them out.

The term became notorious during the arms-for-hostages Iran-Contra scandal in 1986.  I use it here, however, to describe a strategy in which psychiatric "experts" who are paid directly by pharmaceutical companies advocate for non-FDA-approved indications for brand-named drugs (which is supposed to be illegal) in such a way that they can deny that they are doing exactly that.

In my book, How Dysfunctional Families Spur Mental Disorders, I strongly critique the presentation of one particular study that purported to show that antidepressant medications, which are almost all going or have gone generic, are not effective in manic-depressive patients who are in the midst of a depressive episode.  The study seemed to me to have mainly used subjects who had already failed at least one and perhaps two or three other antidepressant medications, making it far less likely that they would respond to the drugs used in the study.

Nowhere in the journal article describing the study does it say that it is a study of treatment-resistant bipolar depression, as opposed to a study of garden-variety bipolar depression.  I figured it out by reading between the lines.  Even though the study itself was not rigged, any experienced clinician could have easily predicted that the chances were excellent that it would turn out exactly the way it did. The journal article was extremely misleading because it did not disclose the true nature of the subject sample. 

If doctors come to believe, as the drug companies want them to, that antidepressants do not work in bipolar depression, they will instead prescribe brand-named anti-psychotic medications, which in my clinical experience have very limited effectiveness in any clinical depression.

Of course, the authors of the study clearly did not recommend that the expensive brand-named atypical antipsychotics be used instead of antidepressants.  They did not need to. The drug companies have other people who do that job for them.  In fact, it is better for the researchers and for PhARMA if researchers do not recommend the other drugs, in order to maintain plausible deniability in case someone like me notices that the study is not what it says it is.

I got into a discussion with a nationally known psychiatrist, Dr. William Glazer, on LinkedIn about whether or not studies can be rigged, and he wanted me to provide an example of one that I thought had been.  I brought up the article in question.  It turns out that Dr. Glazer knew the lead author of the study, Dr. Gary Sachs, as they both had worked at Harvard, and he asked him about what I wrote. 

The language that the two of them used provides an extremely good illustration of plausible deniability as used by PhARMA-influenced experts.

This is a long post and I apologize for that, but I want readers to appreciate the sophistication of how this is done.

I will include some of the exchanges we had, along with additional commentary describing what I suspect might be going on.  These comments were not part of the original exchange, and are in brackets and italics.  Some lay explanations for technical terms are also included in that format.

The original topic of the conversation was a recent article in the New York Times by a Pharma critic, Dr. Marcia Angell.  She had been a hero of mine before, but both Dr. Glazer and I agreed that she went far beyond her expertise in this particular article. Here is the conversation:

William Glazer, M.D.
David Allen • Many of today's RCTs [randomized controlled drug studies] do in fact suck because of rigging by big PhARMA, and a tremendous quantity of mis- and overdiagnosis of mental disorders (e.g. bipolar disorder by doctors who completely disregard the requirements for duration and pervasiveness) is going on because doctors are paid more for medication checks than therapy. On the other hand, some mental illnesses respond better to meds than a lot of the conditions in internal medicine. It is a shame that Dr. Angell has apparently never treated, say, a melancholic depression. To discount widespread clinical experience because some psychiatrists are corrupt or incompetent is shameful.

William Glazer MD • ...No matter how hard you try, it is impossible to "rig" a study to show that an ineffective drug is effective. Pharma studies might show statistical differences that don't have much clinical meaning (because they include large numbers of subjects), but they can't "rig" a study to show something that isn't there. The FDA requires that an antidepressant demonstrate statistical superiority to placebo in 2 separate studies. Over 30 antidepressants have run through that requirement and most of them are available today to help us treat patients.

Marcia Angell and the authors of the books she reviewed are capitalizing on the media and political attention that has come out of lawsuits (most of them settled) against pharmaceutical companies. With only one exception, these authors have not treated patients (as is mentioned in previous comments on this blog). These authors are utilizing media style, not scientific style, to make their points. And they are doing some damage if ONE patient stops antidepressants after reading this misinformation and gets hospitalized, loses a job or commits suicide. And from what I hear, this is happening. If anyone has a patient who was influenced by this misinformation, I would appreciate hearing about it at [his e-mail address].

David Allen • I pretty much agree with all the points made in these comments, and I am extremely disappointed in Angell, because she has in the past discussed what is going on between Pharma and academia and raised many valid points. When it comes to psychiatry, though, she knows nothing. Antidepressants are among the most effective drugs in all of medicine.

I do have to take exception on one point made by Bill Glazer, though: the question of whether Pharma can rig studies. If they can't, why is it that 90% of head-to-head comparisons between me-too drugs [new medications that are almost the same as older medications and do the same thing] come out in favor of the sponsor's drug? Also, if the authors of a study - say on medication for supposed bipolar disorder in children - mix in subjects with "bipolar NOS" because they do not believe in the duration criteria, and also do not take into account that they may be just sedating acting-out children, then the drugs will look "effective." But effective for exactly what?

William Glazer MD • David, your question is a good one, and fortunately, we have the time and space here to air it.

To me, the term "rig" has a ring of the sinister. It implies that the drug studies have hidden elements in their designs in order to dupe reviewers like the FDA, independent clinicians and a wide readership of practitioners into thinking that the drug in question is something that it is not. If that is what you mean by "rig", then I need to ask you to provide a substantive example or two.

If by "rig" you mean that the drug companies select design elements that will bring out the advantages of the particular agent in question, then that is a very different story. Having been involved in clinical research, I have observed that ALL studies are "rigged" in this context of the word regardless of who is funding the project. You have a hypothesis; you set out to prove your hypothesis to be true. You live and die by your findings. A faculty member conducting a line of investigation completely funded by NIMH will not get his/her papers published if he/she does not have positive results.

This kind of "rigged" is how the scientific process works. As far as I am concerned, it is a legitimate process because it is open to scrutiny via peer review, FDA review (which is far more stringent than any peer review conducted by a journal) and reader review. As an aside, note that no manufacturer has won a claim status for superiority of its antidepressant over another one. That's because no study has definitively shown the superiority of one antidepressant over another.

If Company A designs a study comparing their antidepressant to Company B's and they choose for example, an advantageous dose for its product, anyone is able to read the detail and see what they did. [How is under-dosing a comparator drug not rigging the study even if it’s done openly?] That is our responsibility as clinicians to do. The results can be questioned - and they usually are by the manufacturer of the competing antidepressant, and we all can follow that dialogue.

Final point for now: What CAN happen (and does happen) is that Company A proceeds to market a study comparing antidepressants by "spinning" the results. At the end of the day, it is our fault if we buy it, and the company should be held liable for being irresponsible in its communications. I'd say the same for companies that hold back evidence of side effects - that is unacceptable and irresponsible.

So, David, I would be interested to see an example of "rigged" in the first context that I described above. I think that we should be careful about making generalizations about Pharma funded studies. They often bring in new information, and ultimately, innovation. They should not be thrown out wholesale - we'd end up going back to the stone age.

David Allen • Bill, thanks for your thoughtful response. First, I want to make clear that I am not advocating against industry sponsored studies. Of course they bring in new information. Second, I absolutely agree that physicians need to read the studies to see what was done, and it is ultimately up to the doctor to make an informed decision.

Unfortunately, most docs only read the abstracts - if that - which, as you know, do not include the weaknesses of the study described in the discussion section.  [Note as we proceed that Dr. Glazer does not address this point].

You are right about competing companies exposing the weaknesses of their competitors' brand-named drugs, but who is doing that as much for generics? [He does not address this point, either].

In the past, drug companies have deep-sixed studies with negative results and only presented the positive ones to the FDA and the public. Thankfully, I think this practice has been stopped. Replication of course is essential. [Ditto; this point is never addressed].

IMO, most of the studies of Lamictal may not be "rigged" but they are highly misleading. The outcome measure "time to next affective disorder episode" over [a period of] 18 months is almost meaningless. If a drug is prophylactic like lithium, then there wouldn't be many relapses at all in that period. If just 1% of the subjects actually responded to Lamictal, it could beat placebo, since for those responders the time to next episode would be over 18 months. This all could mean that we have much better drugs. But I've never seen this discussed in the articles.
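[To make the arithmetic of this point concrete, here is a toy Monte Carlo sketch. The relapse rate and responder fractions are purely hypothetical numbers of my own choosing, not figures from any actual Lamictal trial:

```python
import random

random.seed(42)

HORIZON = 18            # months of follow-up, as in the maintenance trials
MONTHLY_RELAPSE = 0.15  # assumed monthly relapse chance for non-responders

def mean_time_to_episode(n_patients, responder_fraction):
    """Average months until the next mood episode, censored at HORIZON."""
    total = 0
    for _ in range(n_patients):
        if random.random() < responder_fraction:
            # A true responder has no episode within the study window,
            # so they contribute the maximum (censored) time of 18 months.
            total += HORIZON
        else:
            # Non-responders relapse each month with fixed probability.
            t = 1
            while t < HORIZON and random.random() > MONTHLY_RELAPSE:
                t += 1
            total += t
    return total / n_patients

placebo = mean_time_to_episode(20000, 0.00)
drug = mean_time_to_episode(20000, 0.05)  # even a mere 5% responder rate
print(f"placebo: {placebo:.1f} months, drug: {drug:.1f} months")
```

With these made-up numbers, a drug that truly helps only a small fraction of patients still shows a longer average "time to next episode" than placebo, which is exactly why the outcome measure is so weak.]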

I don't think it's a coincidence that we are hearing all this negative stuff about anti-depressants just as they are almost all going generic. I cannot prove it, but I suspect the drug companies are demonizing them hoping that doctors will prescribe atypicals instead. And that is exactly what I am seeing in patients referred to my residents' clinics. Interesting that the manufacturers of Paxil suddenly seemed to "discover" that they had strong evidence of teratogenicity [the drug may cause some birth defects in babies born to mothers taking the medication] from decades ago.

I believe the drug companies demonized benzos after they went generic by wildly exaggerating their addictive potential. Whenever and wherever you see references to benzos in the professional literature, there is an accompanying phrase to the effect that, "but of course they are addictive." I don't see any references to atypicals that add, "but of course they can cause diabetes." And frankly, if I had to choose, I'd rather be addicted to a benzo than to insulin.

There is a widely-quoted study by Sachs et al in the NEJM claiming antidepressants don't work in bipolar patients, when any clinician who has used lithium for mania prophylaxis knows that they work wonderfully for bipolar depression as long as you have a mood stabilizer on board to prevent switching [from depression straight into a manic high]. The study does not say what percentage of the subjects had already failed other antidepressants, but the subjects were all referred by clinicians and were already on a mood stabilizer, and in the midst of an active episode of bipolar depression [meaning they were all being treated already but were not getting better, so they were referred for the study].

Only patients who had failed the two antidepressants used in the study were excluded; none of the patients who failed the multitude of other antidepressants were excluded. The article says some patients were tapered off of their other meds, but does not say what meds those were.

Furthermore, when some atypicals [antipsychotics like Abilify] became FDA approved for mania, patients on those meds were brought in as well as being on a "mood stabilizer." What all this means is that a significant percentage of the subjects in the study had failed at least one trial of an antidepressant, and perhaps two or three! And some also failed an atypical, which are being touted as antidepressants these days. In other words, they used a treatment-resistant population and never said so, and shock of shocks, placebo outperformed the antidepressants. I wrote e-mails to both of Dr. Sachs' email addresses nicely asking what percentage of the subjects in the study had failed other antidepressants. I never heard back.

That sounds like a rigged study to me.

William Glazer MD • Thanks David. Most of the examples that you refer to are ones in which there is transparency - you may question the authors' interpretation (or non-interpretation) of the data, but the reader is able to see from the details of the study what is going on. [Pharma knows jolly well that most psychiatrists are not going to question a bad outcome measure that sounds reasonable, although technically the study is not “rigged”]. I am currently pursuing your reference to the Sachs et al paper in the NEJM because you seem to feel that there is "rigging" in the first sense of the term as I discussed it above. I'll get back to you on that.

It's interesting how both you and I have our own preferred conspiracy theories about the attack on antidepressants. I seem to be blaming the psychologists and you are blaming the pharmaceutical industry (of course I am being overly dramatic here). But we certainly seem to be in a "fight or flight" response. We are both probably a little paranoid. :-)

David Allen • Bill, we may both be reacting to the attack on antidepressants because we both know they are damn good medications. It's OK to be paranoid when they are after you!!

Gary Sachs, M.D.
William Glazer MD • David - getting back to the issues that you raised, I followed up on the NEJM article by Gary Sachs reporting the results of the STEP-BD study (Sachs et al: Effectiveness of Adjunctive Antidepressant Treatment for Bipolar Depression N Engl J Med 2007; 356:1711-1722 April 26, 2007). I spoke directly with Gary and shared your concerns with him. This is what he said, and he was happy to have me convey it through this medium. I hope it helps:

"In regard to the STEP-BD finding, the suggestion that it was somehow "rigged" surprises me. The study was an NIMH funded treatment effectiveness study. It sought to enroll a sample representative of treatment seeking patients. By all measures it did exactly that. If Dr Allen wants to say that our results may not generalize to treatment naive patients [patients who had never been exposed to any medications], he is correct. If he wants to say that treatment refractory patients are the ones most likely to be seeking treatment, he may be right about that too. [These last two statements are an implicit admission that my primary concern about the study was spot-on correct - that most of the subjects were in fact treatment-resistant. 

However, notice how this specific wording, or a reasonable facsimile thereof, is never clearly used.  And he still does not answer the basic question about what percentage of the subjects in the study had failed one or more previous antidepressants.  Nor does he address the rather blatant omission of a key descriptor of the subject population in the journal article describing the study].  While treatment responders would have little motivation to enter a usual RCT, STEP-BD also enrolled euthymic patients. Of course, if patients stayed well, they would never have been eligible for the randomized acute depression study.

[This is an excellent discussion of why studies of antidepressants are so notoriously difficult to do, but it is in no way a discussion of whether his journal article was misleading.  Note also he uses the words “treatment-naïve” or “treatment-refractory” instead of the current psychiatric buzzword, “treatment-resistant.” This may seem like a trivial point, but many practitioners just lightly scan articles and discussions like this. Using the buzzword might get their attention more than the alternatives. It makes a difference!].

I also agree there are many bipolar patients that apparently respond to standard antidepressants in clinical practice. The problem is that this has never been demonstrated in an adequately powered clinical trial. [True. Just as Glazer had pointed out, he did not know with certainty how this study would come out until he did it. On the other hand, as I mentioned in the intro to this exchange, any experienced clinician would have predicted that it was extremely likely that this study would in fact come out just as it did. If it had not, one might suspect that it was poorly designed. This was a well-designed study of antidepressant response in treatment resistant patients].

Furthermore, placebo did not beat the standard antidepressant in STEP-BD. [Here he is talking about a completely unrelated part of the STEP-BD study. A sly diversion from the topic at hand]. However, since the psychosocial interventions did beat the control condition, it is hard to argue that the study was "rigged" to favor patent medicines.  [I wonder: how many psychotherapists read about drug studies in the New England Journal of Medicine?  Dr. Sachs and PhARMA can rest assured that this point would not be widely reported.  Also, speaking as a practitioner and strong advocate of psychotherapy, doing psychotherapy with a patient in a real bipolar depression is a complete and utter waste of time.  As we all know, having zero energy, thinking at a snail’s pace, being totally overcome with an all-encompassing sense of helplessness and hopelessness, and a blanket sense of the utter futility of everything is a perfect recipe for a successful course of psychotherapy.  If subjects in the STEP-BD sample responded to it, I would have to question the diagnoses of the subjects.

My guess is that they may not have been, at the time of the study, in a bipolar depression at all but were reacting to purely environmental problems. 

Also, throwing a bone to psychotherapy is another frequent PhARMA speaker's tactic so they can claim a presentation was fair and balanced, even though it is done in such a way that most of an audience will ignore it.].

I wish I could have responded to every email about the STEP-BD results. I did the best I could but may have missed his inquiry."  [How convenient. All I asked in my TWO e-mails was, in a very pleasant and professional tone, what percentage of the subjects in the study had failed previous antidepressants.  I also signed it with my title, Professor of Psychiatry at UT, so it was not just any old inquiry.  Guess he just missed them].

David Allen • Bill, that defense is completely inadequate. Why did the article not say what percentage of the sample had failed a previous antidepressant? We know from the STAR-D study that there are diminishing returns when a second and third antidepressant are tried. I guess you could say that the article was transparent, as Dr. Sachs seems to be arguing, because I could read between the lines and see what they did. Perhaps not including this statistic was an innocent oversight, but it sounds like when you talked to him he still did not answer the question. The line between rigging and being extremely misleading is what we seem to disagree about, but that's just semantics. "Opinion leaders" and throwaway journals are already treating the idea that bipolar depression does not respond to antidepressants as an established fact, when it is obviously complete baloney.

William Glazer MD • With all due respect David, you are not creating a fertile ground for an academic discussion about the NEJM STEP-BD study by Dr Sachs and colleagues. [Gee, was I being too mean?]. Dr Sachs read your last comment and respectfully declined to continue the dialogue because he felt that you sounded like you had your mind made up and there was no room for productive collegial debate. [Why would I change my mind when he confirmed that my suspicions were correct? And he was still refusing to answer my central question].  

I re-read the Sachs article and I think that your focus on the narrow question of the effect of the history of antidepressant exposure on the outcome is interesting, but it hardly succeeds in establishing that this NIMH funded study was "rigged".

First, and most importantly, this was a randomized trial. This means, as I know you know, that it is likely that patients who failed on previous antidepressants were equally exposed to one of the two study conditions: antidepressant or placebo. Second, the authors do report on the history of depressive episodes in the patients assigned to drug versus placebo. And there was not the slightest hint of a difference in the randomization. [All true, and completely irrelevant to the question I was asking and the issue at hand about the deception in the presentation of the data]. 

Third, Dr Sachs and colleagues IN THE ABSTRACT did not say that they had proven that antidepressants were ineffective in bipolar depression. They reported their finding and immediately called for additional long-term well designed studies. [Pharma does not need for Dr. Sachs to say that, and probably would not want him to.  Sachs was just a cog in the machine.  Pharma has a lot of other paid lackeys out to “spread the word,” and they will be the ones recommending atypicals as an alternative.  This is exactly what I mean: Dr. Sachs has plausible deniability].

I don't know what thought leaders or throw away journals you are referring to, but if you go to the original source - the Sachs et al NEJM article - I think you'll find a well-designed study that adds incrementally to our knowledge. [Is he just playing dumb here?  He was featured in a whole series of CME tapes sponsored by drug companies called PsychLink.  Does he really not know that “opinion leaders” is the term big Pharma uses to describe their paid-off “experts?”  And if he hasn’t seen the idea that antidepressants don’t work in bipolar depression anywhere else, then he has his head in the psychiatric sand]. A casual read of that paper makes it evident that it does not recommend a practice policy. [Of course not.  It doesn’t have to.  See my commentary at the end of the last paragraph]. 

It is a fine piece of work that has no evidence of being "rigged" in the first sense of the term discussed above. How others interpret and spin this study may be grist for our mill here [MAY be??], but we are talking about high quality research. [It was high quality research that was presented in an extremely misleading way]. We certainly need to know more, and perhaps you can interest someone to design a study that specifically tests your hypothesis. [Sachs and Glazer’s responses in fact tested my hypothesis, and my hypothesis won]. 

David Allen • Also with all due respect, Bill, I do not understand how you can say that omitting from the journal article the highly relevant fact that the subjects in the study were mostly treatment-resistant patients - some probably highly so - is defensible. I'm asking a very simple and highly relevant question of Dr. Sachs about the journal article's presentation of the study, NOT the relative merits of the study itself. If he doesn't want to answer it, I think that speaks to the validity of my argument that the presentation was purposely misleading. I doubt his refusal to answer is because of my attitude. And do you mean to suggest that he doesn't know how his conclusions are being used in the field? Please.

[A Third Party] Dr Allen's comments make sense to me..... Am I missing something???

William Glazer MD • Sorry [third party], but I have taken it as far as I can take it. [He could not take it any further because he had lost the argument].


  1. I think Glazer's in too deep with Big Pharma to provide objective discourse. Perhaps, he needs a dose of reality therapy from the other "Dr. G" (Glasser).
    First off, we don't even know what is being treated in these industry-sponsored studies, because diagnostic and rating instruments are simply not reliable, and for suggestible and medication-inclined individuals readily willing to volunteer as guinea pigs, there is a tendency to overendorse symptoms, so what we end up with is a morass of clinical nondata. Now if these patients were also screened with reliable objective personality tests which include validity scales, such as the MMPI or Millon, many with Cluster B personality disorders would have to be eliminated.
    As far as statistical differences between active drugs and placebo, I'm not surprised that these occur, but certainly not in every study. For example, in a January 2008 New England Journal of Medicine publication (“Selective Publication of Antidepressant Trials and Its Influence on Apparent Efficacy”), researchers compared drug efficacy inferred from published studies with drug efficacy reported to a mandatory U.S. government registry of clinical trials, and found that of the 74 studies on 12 antidepressants, 38 produced positive results for the drug. All but one of those studies were published. However, when it came to the 36 studies with negative or questionable results, only three were published as such, another 11 were turned around and written as if the drug had worked, and 22 were not mentioned at all.
    Perhaps, a more appropriate comparison, say for an antianxiety medication, or even an antidepressant for that matter, would be a head-to-head with Benadryl, instead of placebo. We all know that Benadryl is a pretty good sedative, and for many of these patients, a good night's sleep goes a long way in improving such symptoms. In my experience, many of my patients with subjective anxiety/depression (the vast majority I see) improve as a result of beneficial side effects, rather than the purported intended effects. But they don't generally maintain sustained improvement until they commit to therapy to address ingrained personality dysfunction, and that's a hard sell these days.
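    [Blogger's note: the arithmetic behind the publication figures quoted above is worth spelling out. The derivation of the "apparent" success rate below is mine; the counts are the ones the commenter cites:

```python
# Counts as quoted above from the NEJM 2008 selective-publication study.
positive_trials = 38           # trials favorable to the drug (of 74 total)
negative_trials = 36           # negative or questionable trials
published_positive = 37        # all but one positive trial was published
published_negative_as_neg = 3  # negative trials published as negative
published_negative_spun = 11   # negative trials written up as positive

total_trials = positive_trials + negative_trials
total_published = (published_positive + published_negative_as_neg
                   + published_negative_spun)

# The literature as a reader sees it vs. the trials as actually conducted:
apparent_positive_rate = (published_positive + published_negative_spun) / total_published
actual_positive_rate = positive_trials / total_trials

print(f"apparent: {apparent_positive_rate:.0%}, actual: {actual_positive_rate:.0%}")
```

    So a body of trials of which only about half were actually positive reads, in the published literature, as if nearly all of them were.]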

  2. Dr Z,

    Better yet, how about comparing them head to head to a benzo?

    The diagnostic morass that you describe so eloquently is, in my opinion, the real reason that studies of antidepressants in major depression UNDERESTIMATE their effectiveness.

  3. David, I agree that benzos are the closest class of drugs we have to a panacea in our field, and would probably be superior to the established gold standards for every diagnostic group in comparison studies, though the short acting ones are perhaps too true to be good.
    Regarding the underestimation of the effectiveness of antidepressants and mood stabilizers in genuine Major Depressive Disorder/Bipolar Disorder, I also couldn't agree more. I find it ironic that, on the one hand, we probably now have the potential to delineate such disorders from the rest with advances in neuroimaging, yet that technology may never come to fruition...simply because, on the other hand, these illnesses have been so vastly diluted by the influences of Big Pharma and enterprising charlatans (masquerading as academicians) as to make them useless to study.

  4. Examining this discussion purely as a logician, the amount and degree of the fallacies used is alarming. To begin with, both Drs. Glazer and Sachs repeatedly beg the ultimate question, which, as I understand it is: Was the study deliberately biased by creating a framework strongly favoring subjects known to be resistant to anti-depressants?

    They do so by shifting (another logical error) into spurious issues (the straw man fallacy), such as whether the study was nefarious or malicious, or whether "rigging," in the sense of replacing the actual data with fabricated numbers, occurred.

    Since Dr. Allen wasn't positing any of these straw men, it was rather painful to read the continuing attempts by Dr. Glazer to morph the dialogue into an irrelevant direction.

    As for "plausible deniability," Pharma, like ALL massive businesses today, employs large staffs, which include psychologists, specifically for the purpose of creating messages that appear to say something while actually being meaningless. That's the ULTIMATE plausible deniability. For example, when a medication is "virtually certain" to produce a result, most physicians and even a higher percentage of their patients take that as pretty much a guarantee. To a logician and a seasoned attorney, "virtually certain" has no meaning. It's a nullity.

    In a roundabout way, I'm suggesting that the only way you'll EVER get Dr. Sachs to address your issues is to get him into a courtroom, under oath, and cross-examine him. And you'll need a judge who will force him to answer the questions asked, rather than pontificate on what he'd rather talk about.

    In the same vein, regardless of what the (IMO) worthless drug reps and other flaks say or what the advertising campaigns promise, Pharma will NEVER actually say such things as "atypicals are better for the treatment of depression than anti-depressants." If they did that, they might be held liable for any damage caused by reliance on such information. No, Pharma will imply, suggest, show men smiling at the thought of an erection lasting over four hours, etc. But I've been far too jaded for far too long to think that anything will ever change until the government is disconnected from corporate money.