
Friday, January 25, 2013

Disclaimers: The Good, Instead of the Bad and the Ugly


In an earlier post, Words that Work, I discussed the idea from political consultant and pollster Frank Luntz that “It’s not what you say, it’s what people hear.” This blog has also discussed in detail how talking to one’s family about dysfunctional patterns requires just the right type of wording and tone of voice.

Disclaimers can be used to alter listeners’ perceptions about what another person is saying.  They can be very helpful in making something that otherwise might be perceived as an attack or accusation much more palatable.  

It is also true that disclaimers can be used for more nefarious purposes, such as in deceptive propaganda. I wrote about this latter purpose in two previous posts on plausible deniability (8/31/11 and 6/19/12).

The odious purpose is summed up very well in the cartoon below:



 

In this post I will focus on the use of disclaimers for doing good: their advantageous employment in discussions that aim to achieve solutions to ongoing problems within a family. As a psychotherapist, I find them to be very useful with my patients, and I also coach my patients on how to use them when they attempt metacommunication with family members.

 

Disclaimers are pre-statements that acknowledge the potentially unpleasant nature of an issue at hand, proclaim the lack of any ill intent on the part of the person making the statement that follows the disclaimer, and give others the benefit of the doubt concerning their motivation for engaging in problematic behavior. Disclaimers can also be used to avoid power struggles that tend to occur when someone might be perceived as sounding like a know-it-all or like someone trying to “put one over” on the other person.

Disclaimers can make it possible to bring up for discussion just about anything. Of course, tone of voice is extremely important.  If someone is trying to bring up problematic family behavior with other members of the family, a scolding or sarcastic tone will automatically nullify any advantage conferred through the use of disclaimers.  Usually, tone should be matter of fact as well as friendly sounding for maximum effect.

In the type of psychotherapy I do, unified therapy, I frequently need to bring up and explore a patient’s problematic or counterproductive behavior, or describe potentially unflattering hypotheses about the patient’s family relationship patterns. Patients have a natural tendency to become defensive in these situations, and a therapist runs the risk of provoking a negative reaction of some sort. The use of a disclaimer often makes the initiation of such discussions more palatable to the patient. 

When making interpretations regarding a patient or his or her family, the therapist’s use of disclaimers leads the patient to become less likely to get defensive and more likely to consider the merits of the therapist’s proposition. Later on in unified therapy, therapists teach patients to make use of disclaimers during metacommunication with their family about relationship patterns and issues. 

Disclaimers can be used in innumerable ways. A few examples will be given here of the types of situations in which disclaimers are useful. The examples are also meant to give the reader a general idea about how disclaimers should be phrased. 

First, when bringing up someone else’s seemingly provocative behavior, the metacommunicator might say something such as, “I know you’re not trying to anger me when you do that, but when you do [such and such], it would be easy for someone who did not know you so well to get the wrong idea.”  

Second, in situations where the Other has a hard time discussing a certain topic, one might say, “I know this is hard to talk about, but it sounds like it is really important.”

Third, family members often hold the belief that certain behavior from another family member is purposely meant to “ask for” or elicit a nasty response.  They may be reluctant to say so, however, for fear they will be branded as self-serving or even crazy.  The metacommunicator can often get the Other to acknowledge such thoughts by putting the burden of “craziness” on himself or herself:  “This is probably going to sound crazy, but I wonder if sometimes you get the idea that mom wants you to steal money from her. After all, she keeps leaving it in plain sight.”

Fourth, disclaimers are useful for bringing up for discussion the obvious ways that the Other’s behavior causes problems without sounding like a critical parent or insulting the Other’s intelligence. The metacommunicator might say, “At risk of sounding just like Mom, and as I’m sure you already know, attacking Dad does not seem to solve anything.”

Fifth, many times a metacommunicator has a hypothesis about what might be going on in the family, but is not sure. However, the Other may take umbrage at the implications of such a hypothesis. This happens for many reasons, including the possibility that the hypothesis in question is flat-out wrong. Giving the Other an “out” so that he or she can easily reject the proposal without getting into an argument can solve this problem. One can say, “I don’t know if this is accurate or not, but I wonder if [such and such] might be happening. What do you think?”

Sixth, whenever a metacommunicator brings up the behavior of family members who seem to be contributing to the speaker’s problems, others will often defend their family. They do so despite the fact that they themselves are at wit’s end with the relative being discussed. Defending one’s family from a perceived attack, even if one is angry at them oneself, is quite a natural reaction, but it may preclude much useful discussion about the possible reasons for the family member’s misbehavior. A useful disclaimer that may prevent this from happening is, “I’m not trying to turn Dad into a villain, but…”

Last, metacommunicators should also make use of disclaimers when explaining their thoughts and reactions to significant others. This is part and parcel of the important strategy of giving family members the benefit of the doubt as to their motivation when asking them to be aware of and change behavior that the metacommunicator finds problematic.

For example, they might say, "I know you wanted me to be successful, but it often appeared to me that you did not" or "I know you really do care about me but..."  If the other then says that the metacommunicator is stupid for thinking or feeling the way they do, the metacommunicator can humbly say, “Maybe so, but that’s how it looks to me, and I’m sure you don’t want me to get the wrong idea about you, so I thought it would be important to let you know how this looks to me.”

Of course, disclaimers do not always have the desired effect, but they do often enough that employing them is an excellent strategy.

Tuesday, June 19, 2012

Disease Mongering in a Respected Journal and Plausible Deniability




In my post of August 31, 2011, Plausible Deniability, I illustrated how doctors under the sway of pharmaceutical companies widely distribute a completely invalid “take home message” to readers of journal articles and those who listen to academic-sounding presentations, while simultaneously providing themselves with an “out” so that they can deny doing just that. Some of these strategies were developed from the drug company marketing departments' intensive research into physicians and the way they think (see post: Physicians As Unwitting Research Subjects, 1/3/12).

Apparently, these strategies are widely disseminated to physicians and researchers working with Pharma.  They are just too common.  A great example occurred in a rebuttal to a letter to the editor that I and several of my partners in crime (Peter I. Parry, Robert Purssey, Glen I. Spielmans, Jon Jureidini, Nicholas Z. Rosenlicht, David Healy, and Irwin Feinberg) managed to get published in the June 2012 issue of the Archives of General Psychiatry.  The Archives is considered one of the two top journals in psychiatry.

The letter was highly critical of a study that was published in a previous issue.  The article was one I blogged about in a previous post (More Disease Mongering in a Respected Journal, 8/13/11).  The gist of our published letter was described in that post, and I will not repeat it here.

However, let me use the rebuttal to our letter, printed in the same issue of the Archives, to illustrate how the authors avoid actually addressing the criticisms in the letter and deny that they meant to conclude from their "study" that which was strongly implied by their journal article. The latter issue is what I previously referred to as plausible deniability.

Please keep in mind that when journals publish letters to the editor that are critical of one of their published studies, they allow the authors of the original study to respond to the criticisms, but that is where it ends.  They do not give letter writers the chance to respond in the journal to the rebuttal.  (It is a situation similar to that of reporters at a presidential press conference who are not allowed to ask follow-up questions).  So I’m doing it here.  Next to what they wrote in said rebuttal, I will provide my own commentary.

We are pleased to respond to the points raised by Allen et al, some of which take material out of context and quote news media articles beyond our control. For example, the letter states that “The message is that almost half the patients with a major depressive episode have undiagnosed bipolar disorder and are ‘not receiving necessary mood stabilizer treatment.’”

The authors are well aware of exactly how the news media were going to interpret their study. Ditto doctors who read the article. The drug companies have apparently taught these authors that readers will routinely ignore the disclaimers that they list next in their rebuttal – a case of plausible deniability. The article is designed to give a very specific “take home message.” The success of this strategy is illustrated by those very news stories over which they are now saying they have no control. Of course they don’t need to have direct control to achieve this goal.

Our actual statements are: "Based on these studies and the major differences in treatment guidelines for MDD [major depressive disorder] and bipolar disorder, we recommend that, among patients with MDEs [major depressive episodes], the presence of bipolar features, including all those with significant predictive value reported in this study, should be investigated carefully before a decision is made to prescribe antidepressants. If patients exhibit bipolar symptoms that impair everyday functioning, treatment with a mood stabilizer or an atypical antipsychotic may be useful."

The take-home message from what they “actually said”: exactly what we said it was. This paragraph subtly equates "bipolar features" with the agitation seen in major depressive disorder - a fact nowhere in evidence.


This conflation is even more pronounced in the abstract of the article (the short summary at the beginning of the article which is usually the only thing that most busy physicians actually read). The introduction states "Many patients with major depressive episodes who have an underlying but unrecognized bipolar disorder receive pharmacologic treatment with ineffective regimens that do not include mood stabilizers."  This sounds like the article is going to demonstrate unrecognized signs of bipolar disorder and will "orient" anyone who reads the whole thing to think along those lines.

They assert that “The study’s findings are based on a ‘bipolar specifier’ requiring ‘no minimum duration of symptoms’ and ‘no exclusion criteria,’ ” and that “Any subject who came to psychiatric attention with an angry, agitated, or elated response to environmental triggers or psychoactive substances might have met criteria for ‘bipolarity.’ ”  

The criteria, stated in the “Methods” section of our article,1(p793) were (1) an episode of elevated mood, an episode of irritable mood, or an episode of increased activity with (2) at least 3 of the symptoms listed under Criterion B of the DSM-IV-TR …The minimum duration of symptoms required for a hypomanic episode was 1 day.

Here the authors are flat-out contradicting themselves! I quote from the original article itself: “No minimum duration of symptoms was required and no exclusion criteria were applied” (page 793). And the criteria in the article do not exclude active drug abusers - a point we raised that the authors simply ignore in their rebuttal.

We assessed the duration reported for hypomanic episodes in 5 groups. Among subjects with major depressive episode with hypomanic episodes, 7.8% reported episodes of 1 day’s duration; 2 to 3 days’ duration was more frequent than 4 to 6 days.

Even if they did have a minimum duration criterion, the DSM requires a minimum of four days for even a hypomanic episode. Really, one day? In patients who met criteria for major depressive disorder? Riiiight.

…associated with (3) at least 1 of the 3 following consequences: unequivocal and observable change in functioning uncharacteristic of the person’s usual behavior, marked impairment in social or occupational functioning observable by others, or requiring hospitalization or outpatient treatment.

Neither the article nor the rebuttal tells us how the study doctors made the determination that there was an unequivocal “change in functioning uncharacteristic of the person’s usual behavior.” Especially since, under their rules, you only have to be agitated for a day, and if you took cocaine or had a big fight with your mother, you might have an unequivocal change in your “usual” functioning. What the phrase is supposed to mean is that the patient’s functioning has unequivocally changed under any and all environmental contingencies. The patient would have to be more reactive than usual to all unpleasant situations to a similar degree.

So how would the study doctors know this? Did they take the patient’s or a family member’s word for it? I can tell you beyond a shadow of a doubt that patients rarely understand what psychiatrists mean by this phrase. The only way a doctor can know this is the case is to observe the patient several times over several weeks, both during and outside of the specified time period.


Even a close approximation would require taking an extensive psychosocial history, including evaluating current environmental stresses as well as exploring the nature, past history, and current status of the subjects’ relationships with spouses, lovers, parents, and children. Maybe they did that, but I doubt it, because doctors like these tend to denigrate the importance of such factors in favor of “disease” explanations. And it would take a LOT of time.

No exclusion criteria for manic/hypomanic episodes associated with antidepressant or other drug use were applied.

So people who got agitated from a side effect of an antidepressant were not excluded, by their own admission. Someone gets a side effect from a drug, and that proves they are manic?

Importantly, the initial eligibility criterion was that patients have presented to clinical settings for evaluation and treatment of a major depressive episode per DSM-IV-TR criteria. These sequential criteria, applied by senior psychiatrists in each country, are entirely inconsistent with the assertion that the psychiatrists conducting the assessments enrolled “any subject who came to psychiatric attention with an angry, agitated, or elated response to environmental triggers.” 


The statement that 23.2% of subjects experienced elevated or irritable mood triggered by antidepressants did not “define the subjects as having ‘bipolar disorder.’” Rather, it addresses the DSM A criteria, which are essential, but not sufficient, for diagnosis of bipolar disorder. As Figure 1 in our article shows, mood lability while taking antidepressants occurred in 55.8% of bipolar specifier–positive vs 23.0% of bipolar specifier–negative subjects (odds ratio, 1.7; 95% CI, 1.4-2.0) and mania/hypomania while taking antidepressants occurred in 37.2% of bipolar specifier–positive vs 3.4% of bipolar specifier–negative subjects (odds ratio, 5.7; 95% CI, 4.4-7.5).

Sorry, but with this paragraph the authors are still implying that their subjects MAY be bipolar, and assuming precisely what the article is supposed to show - that a patient who is agitated when depressed could have a manic symptom. So if patients with an agitated depression are more likely to become more agitated on an antidepressant than depressed patients without agitation, that is supposed to show that they might be bipolar? Only by circular reasoning.

Allen et al view their position as part of a “debate” about the “ever-widening bipolar spectrum.” We consider data, not debates, as central to the progress in the scientific understanding of mood disorders.

Ha! This is a brazenly outrageous statement. The “debate” is specifically ABOUT "data" like theirs – both its validity and what it means.

They make several references to borderline personality disorder. The BRIDGE study assessed for comorbid diagnoses in all subjects. Five hundred thirty-two patients (9.3%) met DSM-IV-TR criteria for borderline personality disorder. This large sample provides an opportunity to analyze patients who met borderline criteria vs those who did not. We are completing a manuscript that will provide useful evidence on this subject.

Maybe they should have said this in the original article. But we know from the work of Zimmerman and others (my Psychology Today blog post, 12/11/11) that many patients who have borderline personality disorder are misdiagnosed.

Allen et al cast unseemly aspersions that the BRIDGE study was a vehicle to promote sales of an antipsychotic drug sold by sanofi-aventis. sanofi-aventis has no antipsychotic with an indication for bipolar disorder.

Here the study authors are being complete weasels. The misleading point is contained in the phrase “with an indication for bipolar disorder.” What they say is literally true - in the United States. Unfortunately, Sanofi does have an antipsychotic drug called amisulpride (brand name, Solian). In the United States, it is not FDA-approved for any indication, let alone for bipolar disorder.


However, Solian is approved and widely marketed in Europe and Australia and, at least according to Wikipedia, used for bipolar disorder. (This may be why the study was conducted overseas.) In addition, Sanofi also sells a preparation of Depakote, which, while an anticonvulsant and not an antipsychotic, is widely used in both actual and misdiagnosed bipolar disorder.

Besides, as I described in my post of 6/12/12, marketing for off-label uses of drugs for bipolar disorder is unequivocally rampant.  Maybe the authors didn’t know this?  NOT.


We know of no evidence that this was the case at any stage of development and execution of the BRIDGE study. Sanofi-aventis ceased financial support for analyses of the study in 2010. All work subsequently conducted has been achieved by our local funds.

The drug company got out of the game just in time for the authors to claim they were not biased by the funding source. Actually, the original article says “The sponsor of this study (sanofi aventis) was involved in the study design, conduct, monitoring, data analysis, and preparation of the report.”


In addition, all of the clinicians recruited for the study received fees, on a per-patient basis, from Sanofi-Aventis in recognition of their participation in the study. The key lead authors, all with significant Pharma connections, did not disclose their other pharmaceutical company ties. These authors: Allan H. Young, MD, Jules Angst, MD, Jean-Michel Azorin, MD, Eduard Vieta, MD, Giulio Perugi, MD, Alex Gamma, PhD, Charles L. Bowden, MD.


They should be ashamed of themselves.

Wednesday, August 31, 2011

Plausible Deniability

Once rockets go up
Who cares where they come down?
That's not my department
Says Wernher von Braun
                                                  ~Tom Lehrer
Pharmaceutical companies and their marketing departments have been studying the psychology and the behavior of physicians for decades and have become masters of the subtle con job.  The goal of the con job is to convince physicians to prescribe new and expensive drugs when old generics will do the job better, much more cheaply, and sometimes with fewer side effects.

This con job often involves multiple components that seem to be acting independently but are being coordinated behind the scenes, and it employs very subtle mind tricks to shape the thinking of the physician. The influence techniques are also taught to their army of pharmaceutical representatives who visit doctors in their offices, and who also learn how to ingratiate themselves with the physician.

This post will focus on one of their techniques, which I refer to as plausible deniability. Plausible deniability is defined as the believable denial of a fact or allegation, or of previous knowledge of a fact. The term most often refers to the denial of blame for wrongdoing. In the case that illegal or otherwise disreputable and unpopular activities become public, high-ranking officials and academic physicians alike may deny any awareness of such acts or any connection to the agents used to carry out such acts.

The term became notorious during the arms-for-hostages Iran-Contra scandal in 1986.  I use it here, however, to describe a strategy in which psychiatric "experts" who are paid directly by pharmaceutical companies advocate for non-FDA approved indications for brand-named drugs (the doing of which is supposed to be illegal) in such a way that they can deny that they are doing exactly that.



In my book, How Dysfunctional Families Spur Mental Disorders, I strongly critique the presentation of one particular study that purported to show that antidepressant medications, which are almost all going or have gone generic, are not effective in manic-depressive patients who are in the midst of a depressive episode.  The study seemed to me to have mainly used subjects who had already failed at least one and perhaps two or three other antidepressant medications, making it far less likely that they would respond to the drugs used in the study.

Nowhere in the journal article describing the study does it say that it is a study of treatment-resistant bipolar depression, as opposed to a study of garden-variety bipolar depression.  I figured it out by reading between the lines.  Even though the study itself was not rigged, any experienced clinician could have easily predicted that the chances were excellent that it would turn out exactly the way it did. The journal article was extremely misleading because it did not disclose the true nature of the subject sample. 

If antidepressants do not work in bipolar depression, as the drug companies want doctors to believe, the doctors will instead prescribe brand-name antipsychotic medications, which in my clinical experience have very limited effectiveness in any clinical depression.

Of course, the authors of the study clearly did not recommend that the expensive brand-name atypical antipsychotics be used instead of antidepressants. They did not need to. The drug companies have other people who do that job for them. In fact, it is better for the researchers and for PhARMA if the researchers do not make that recommendation themselves, in order to maintain plausible deniability in case someone like me notices that the study is not what it says it is.

I got into a discussion with a nationally known psychiatrist, Dr. William Glazer, on LinkedIn about whether or not studies can be rigged, and he wanted me to provide an example of one that I thought had been.  I brought up the article in question.  It turns out that Dr. Glazer knew the lead author of the study, Dr. Gary Sachs, as they both had worked at Harvard, and he asked him about what I wrote. 

The language that the two of them used provides an extremely good illustration of plausible deniability as used by PhARMA-influenced experts.

This is a long post and I apologize for that, but I want readers to appreciate the sophistication of how this is done.

I will include some of the exchanges we had, along with additional commentary describing what I suspect might be going on.  These comments were not part of the original exchange, and are in brackets and italics.  Some lay explanations for technical terms are also included in that format.

The original topic of the conversation was a recent article in the New York Times by a Pharma critic, Dr. Marcia Angell. She had been a hero of mine before, but both Dr. Glazer and I agreed that she went far beyond her expertise in this particular article. Here is the conversation:

William Glazer, M.D.
David Allen • Many of today's RCT's [randomized controlled drug studies] do in fact suck because of rigging by big PhARMA, and a tremendous quantity of mis- and overdiagnosis of mental disorders (e.g. bipolar disorder by doctors who completely disregard the requirements for duration and pervasiveness) is going on because doctors are paid more for medication checks than therapy. On the other hand, some mental illnesses respond better to meds than a lot of the conditions in internal medicine. It is a shame that Dr. Angell has apparently never treated, say, a melancholic depression. To discount widespread clinical experience because some psychiatrists are corrupt or incompetent is shameful.

William Glazer MD • ...No matter how hard you try, it is impossible to "rig" a study to show that an ineffective drug is effective. Pharma studies might show statistical differences that don't have much clinical meaning (because they include large numbers of subjects), but they can't "rig" a study to show something that isn't there. The FDA requires that an antidepressant demonstrate statistical superiority to placebo in 2 separate studies. Over 30 antidepressants have run through that requirement and most of them are available today to help us treat patients.

Marcia Angell and the authors of the books she reviewed are capitalizing on the media and political attention that has come out of law suits (most of them settled) against pharmaceutical companies. With only one exception, these authors have not treated patients (as is mentioned in previous comments on this blog). These authors are utilizing media style, not scientific style to make their points. And they are doing some damage if ONE patient stops antidepressants after reading this misinformation and gets hospitalized, loses a job or commits suicide. And from what I hear, this is happening. If anyone has a patient who was influenced by this misinformation, I would appreciate hearing about it at [his e-mail address].


David Allen • I pretty much agree with all the points made in these comments, and I am extremely disappointed in Angell, because she has in the past discussed what is going on between Pharma and academia and raised many valid points. When it comes to psychiatry, though, she knows nothing. Antidepressants are among the most effective drugs in all of medicine.

I do have to take exception on one point made by Bill Glazer, though: the question of whether Pharma can rig studies. If they can't, why is it that 90% of head to head comparisons between me-too drugs [New medications that are almost the same as older medications and do the same thing] come out in favor of the sponsor's drug? Also, if the authors of a study - say on medication for supposed bipolar disorder in children - mix in subjects with "bipolar NOS" because they do not believe in the duration criteria, and also do not take into account that they may be just sedating acting-out children, then the drugs will look "effective." But effective for exactly what?


William Glazer MD • David, your question is a good one, and fortunately, we have the time and space here to air it.

To me, the term "rig" has a ring of the sinister. It implies that the drug studies have hidden elements in their designs in order to dupe reviewers like the FDA, independent clinicians and a wide readership of practitioners into thinking that the drug in question is something that it is not. If that is what you mean by "rig", then I need to ask you to provide a substantive example or two.

If by "rig" you mean that the drug companies select design elements that will bring out the advantages of the particular agent in question, then that is a very different story. Having been involved in clinical research, I have observed that ALL studies are "rigged" in this context of the word regardless of who is funding the project. You have a hypothesis, you set out to prove your hypothesis to be true. You live and die by your findings. A faculty member conducting a line of investigation completely funded by NIMH will not get his/her papers published if he/she does not have positive results.

This kind of "rigged" is how the scientific process works. As far as I am concerned, it is a legitimate process because it is open to scrutiny via peer review, FDA review (which is far more stringent than any peer review conducted by journal) and reader review. As an aside, note that no manufacturer has won a claim status for superiority of its antidepressant over another one. That's because no study has definitively shown the superiority of one antidepressant over another.

If Company A designs a study comparing their antidepressant to Company B's and they choose for example, an advantageous dose for its product, anyone is able to read the detail and see what they did. [How is under-dosing a comparator drug not rigging the study even if it’s done openly?] That is our responsibility as clinicians to do. The results can be questioned - and they usually are by the manufacturer of the competing antidepressant, and we all can follow that dialogue.

Final point for now: What CAN happen (and does happen) is that Company A proceeds to market a study comparing antidepressants by "spinning" the results. At the end of the day, it is our fault if we buy it, and the company should be held liable for being irresponsible in its communications. I'd say the same for companies that hold back evidence of side effects - that is unacceptable and irresponsible.

So, David, I would be interested to see an example of "rigged" in the first context that I described above. I think that we should be careful about making generalizations about Pharma funded studies. They often bring in new information, and ultimately, innovation. They should not be thrown out wholesale - we'd end up going back to the stone age.


David Allen • Bill, thanks for your thoughtful response. First, I want to make clear that I am not advocating against industry sponsored studies. Of course they bring in new information. Second, I absolutely agree that physicians need to read the studies to see what was done, and it is ultimately up to the doctor to make an informed decision.

Unfortunately, most docs only read the abstracts - if that - which, as you know, do not include the weaknesses of the study described in the discussion section. [Note as we proceed that Dr. Glazer does not address this point].


You are right about competing companies exposing the weaknesses of their competitors' brand-named drugs, but who is doing that as much for generics? [He does not address this point, either].

In the past, drug companies have deep-sixed studies with negative results and presented only the positive ones to the FDA and the public. Thankfully, I think this practice has been stopped. Replication, of course, is essential. [Ditto; this point is never addressed].


IMO, most of the studies of Lamictal may not be "rigged," but they are highly misleading. The outcome measure "time to next affective disorder episode" over [a period of] 18 months is almost meaningless. If a drug is prophylactic like lithium, then there wouldn't be many relapses at all in that period. If just 1% of the subjects actually responded to Lamictal, it would beat placebo, since those subjects' time to next episode would be over 18 months. All of this could mean that we have much better drugs available than Lamictal. But I've never seen this discussed in the articles.

I don't think it's a coincidence that we are hearing all this negative stuff about antidepressants just as they are almost all going generic. I cannot prove it, but I suspect the drug companies are demonizing them in the hope that doctors will prescribe atypicals instead. And that is exactly what I am seeing in patients referred to my residents' clinics. It is interesting that the manufacturers of Paxil suddenly seemed to "discover" that they had strong evidence of teratogenicity [the drug may cause birth defects in babies born to mothers taking the medication] from decades ago.

I believe the drug companies demonized benzos after they went generic by wildly exaggerating their addictive potential. Whenever and wherever you see references to benzos in the professional literature, there is an accompanying phrase to the effect of, "but of course they are addictive." I don't see any references to atypicals that add, "but of course they can cause diabetes." And frankly, if I had to choose, I'd rather be addicted to a benzo than to insulin.

There is a widely quoted study by Sachs et al. in the NEJM claiming antidepressants don't work in bipolar patients, when any clinician who has used lithium for mania prophylaxis knows that antidepressants work wonderfully for bipolar depression as long as you have a mood stabilizer on board to prevent switching [from depression straight into a manic high]. The study does not say what percentage of the subjects had already failed other antidepressants, but the subjects were all referred by clinicians, were already on a mood stabilizer, and were in the midst of an active episode of bipolar depression [meaning they were all being treated already but were not getting better, so they were referred for the study].

Only patients who had failed the two antidepressants used in the study were excluded; patients who had failed any of the multitude of other antidepressants were not. The article says some patients were tapered off their other meds, but does not say what meds those were.

Furthermore, when some atypicals [antipsychotics like Abilify] became FDA-approved for mania, patients on those meds were brought in as well, with the atypical counting as their "mood stabilizer." What all this means is that a significant percentage of the subjects in the study had failed at least one trial of an antidepressant, and perhaps two or three! Some had also failed an atypical - and atypicals are being touted as antidepressants these days. In other words, they used a treatment-resistant population and never said so, and, shock of shocks, placebo outperformed the antidepressants. I wrote e-mails to both of Dr. Sachs's e-mail addresses, politely asking what percentage of the subjects in the study had failed other antidepressants. I never heard back.

That sounds like a rigged study to me.


William Glazer MD: Thanks, David. Most of the examples that you refer to are ones in which there is transparency - you may question the authors' interpretation (or non-interpretation) of the data, but the reader is able to see from the details of the study what is going on. [Pharma knows jolly well that most psychiatrists are not going to question a bad outcome measure that sounds reasonable, even though technically the study is not "rigged."] I am currently pursuing your reference to the Sachs et al. paper in the NEJM because you seem to feel that there is "rigging" in the first sense of the term as I discussed it above. I'll get back to you on that.

It's interesting how you and I each have our own preferred conspiracy theory about the attack on antidepressants. I seem to be blaming the psychologists, and you are blaming the pharmaceutical industry (of course I am being overly dramatic here). But we certainly seem to be in a "fight or flight" response. We are both probably a little paranoid. :-)

David Allen: Bill, we may both be reacting to the attack on antidepressants because we both know they are damn good medications. It's OK to be paranoid when they are after you!!

Gary Sachs, M.D.
William Glazer MD: David - getting back to the issues that you raised, I followed up on the NEJM article by Gary Sachs reporting the results of the STEP-BD study (Sachs et al: Effectiveness of Adjunctive Antidepressant Treatment for Bipolar Depression. N Engl J Med 2007; 356:1711-1722, April 26, 2007). I spoke directly with Gary and shared your concerns with him. This is what he said, and he was happy to have me convey it through this medium. I hope it helps:

"In regard to the STEP-BD finding, the suggestion that it was somehow "rigged" surprises me. The study was an NIMH-funded treatment effectiveness study. It sought to enroll a sample representative of treatment-seeking patients. By all measures it did exactly that. If Dr. Allen wants to say that our results may not generalize to treatment-naive patients [patients who had never been exposed to any medications], he is correct. If he wants to say that treatment-refractory patients are the ones most likely to be seeking treatment, he may be right about that too. [These last two statements are an implicit admission that my primary concern about the study was spot-on correct - that most of the subjects were in fact treatment-resistant. 

However, notice how this specific wording, or a reasonable facsimile thereof, is never clearly used.  And he still does not answer the basic question of what percentage of the subjects in the study had failed one or more previous antidepressants.  Nor does he address the rather blatant omission of a key descriptor of the subject population from the journal article describing the study].  While treatment responders would have little motivation to enter a usual RCT, STEP-BD also enrolled euthymic patients. Of course, if patients stayed well, they would never have been eligible for the randomized acute depression study.

[This is an excellent discussion of why studies of antidepressants are so notoriously difficult to do, but it is in no way a discussion of whether his journal article was misleading.  Note also that he uses the words "treatment-naïve" and "treatment-refractory" instead of the current psychiatric buzzword, "treatment-resistant." This may seem like a trivial point, but many practitioners just lightly scan articles and discussions like this. Using the buzzword might get their attention more than the alternatives. It makes a difference!].

I also agree that there are many bipolar patients who apparently respond to standard antidepressants in clinical practice. The problem is that this has never been demonstrated in an adequately powered clinical trial. [True. Just as Glazer had pointed out, he did not know with certainty how this study would come out until he did it. On the other hand, as I mentioned in the intro to this exchange, any experienced clinician would have predicted that it was extremely likely that this study would come out just as it did. If it had not, one might suspect that it was poorly designed. This was a well-designed study of antidepressant response in treatment-resistant patients].


Furthermore, placebo did not beat the standard antidepressant in STEP-BD. [Here he is talking about a completely unrelated part of the STEP-BD study - a sly diversion from the topic at hand]. However, since the psychosocial interventions did beat the control condition, it is hard to argue that the study was "rigged" to favor patent medicines.  [I wonder: how many psychotherapists read about drug studies in the New England Journal of Medicine?  Dr. Sachs and Pharma can rest assured that this point would not be widely reported.  Also, speaking as a practitioner and strong advocate of psychotherapy, doing psychotherapy with a patient in a real bipolar depression is a complete and utter waste of time.  As we all know, having zero energy, thinking at a snail's pace, and being totally overcome by an all-encompassing sense of helplessness and hopelessness and a blanket sense of the utter futility of everything is a perfect recipe for a successful course of psychotherapy.  If subjects in the STEP-BD sample responded to it, I would have to question their diagnoses.


My guess is that those subjects may not have been, at the time of the study, in a bipolar depression at all, but were instead reacting to purely environmental problems. 


Also, throwing a bone to psychotherapy is another frequent tactic of Pharma speakers, so they can claim a presentation was fair and balanced, even though it is done in such a way that most of the audience will ignore it.].

I wish I could have responded to every email about the STEP-BD results. I did the best I could but may have missed his inquiry."  [How convenient. All I asked, in a very pleasant and professional tone, in my TWO e-mails was what percentage of the subjects in the study had failed previous antidepressants.  I also signed them with my title, Professor of Psychiatry at UT, so it was not just any old inquiry.  Guess he just missed them].

David Allen: Bill, that defense is completely inadequate. Why did the article not say what percentage of the sample had failed a previous antidepressant? We know from the STAR-D study that there are diminishing returns when a second and third antidepressant are tried. I guess you could say that the article was transparent, as Dr. Sachs seems to be arguing, because I could read between the lines and see what they did. Perhaps not including this statistic was an innocent oversight, but it sounds like when you talked to him he still did not answer the question. The line between rigging and being extremely misleading is what we seem to disagree about, but that's just semantics. "Opinion leaders" and throwaway journals are already treating the idea that bipolar depression does not respond to antidepressants as an established fact, when it is obviously complete baloney.

William Glazer MD: With all due respect, David, you are not creating fertile ground for an academic discussion about the NEJM STEP-BD study by Dr. Sachs and colleagues. [Gee, was I being too mean?]. Dr. Sachs read your last comment and respectfully declined to continue the dialogue because he felt that you sounded like you had your mind made up and that there was no room for productive collegial debate. [Why would I change my mind when he confirmed that my suspicions were correct? And he was still refusing to answer my central question].  

I re-read the Sachs article, and I think that your focus on the narrow question of the effect of the history of antidepressant exposure on the outcome is interesting, but it hardly succeeds in establishing that this NIMH-funded study was "rigged".

First, and most importantly, this was a randomized trial. This means, as I know you know, that patients who had failed previous antidepressants were likely to be equally distributed between the two study conditions: antidepressant or placebo. Second, the authors do report on the history of depressive episodes in the patients assigned to drug versus placebo, and there was not the slightest hint of a difference in the randomization. [All true, and completely irrelevant to the question I was asking and to the issue at hand: the deception in the presentation of the data]. 

Third, Dr. Sachs and colleagues, IN THE ABSTRACT, did not say that they had proven that antidepressants were ineffective in bipolar depression. They reported their finding and immediately called for additional long-term, well-designed studies. [Pharma does not need Dr. Sachs to say that, and probably would not want him to.  Sachs was just a cog in the machine.  Pharma has a lot of other paid lackeys out to "spread the word," and they will be the ones recommending atypicals as an alternative.  This is exactly what I mean: Dr. Sachs has plausible deniability].

I don't know what thought leaders or throwaway journals you are referring to, but if you go to the original source - the Sachs et al. NEJM article - I think you will find a well-designed study that adds incrementally to our knowledge. [Is he just playing dumb here?  He was featured in a whole series of CME tapes sponsored by drug companies called PsychLink.  Does he really not know that "opinion leaders" is the term big Pharma uses to describe its paid-off "experts"?  And if he hasn't seen the idea that antidepressants don't work in bipolar depression anywhere else, then he has his head in the psychiatric sand]. A casual read of that paper makes it evident that it does not recommend a practice policy. [Of course not.  It doesn't have to.  See my commentary at the end of the last paragraph]. 

It is a fine piece of work that shows no evidence of being "rigged" in the first sense of the term discussed above. How others interpret and spin this study may be grist for our mill here [MAY be??], but we are talking about high-quality research. [It was high-quality research that was presented in an extremely misleading way]. We certainly need to know more, and perhaps you can interest someone in designing a study that specifically tests your hypothesis. [Sachs and Glazer's responses in fact tested my hypothesis, and my hypothesis won]. 


David Allen: Also with all due respect, Bill, I do not understand how you can say that omitting from the journal article the highly relevant fact that the subjects in the study were mostly treatment-resistant patients - some probably highly so - is defensible. I'm asking Dr. Sachs a very simple and highly relevant question about the journal article's presentation of the study, NOT about the relative merits of the study itself. If he doesn't want to answer it, I think that speaks to the validity of my argument that the presentation was purposely misleading. I doubt his refusal to answer is because of my attitude. And do you mean to suggest that he doesn't know how his conclusions are being used in the field? Please.

[A Third Party]: Dr. Allen's comments make sense to me..... Am I missing something???

William Glazer MD: Sorry, [third party], but I have taken it as far as I can take it. [He could not take it any further because he had lost the argument].