
Showing posts with label epidemiological studies. Show all posts

Tuesday, July 10, 2018

The Amazing Complexity of Environmental Research in Psychiatry





In my Psychology Today post of 12/24/12, Why Psychotherapy Outcome Studies are Nearly Impossible, I discussed the large number of variables that those studies fail to take into account, which calls into question any conclusions drawn from them. These include unmeasured variations in therapist technique, sampling problems with subjects who can vary widely in their proclivities and sensitivities, the difficulty of finding an active control treatment, the lack of double blinding, and a lack of complete candor by subjects.

The same types of issues apply to epidemiological research into environmental risk factors for various psychiatric disorders. Most studies try to measure the effect of a single environmental exposure on a single outcome—something that rarely exists in the real world.

In a “viewpoint” article from JAMA Psychiatry published online on June 6, 2018, by Guloksuz, van Os, and Rutten ("The Exposome Paradigm and the Complexities of Environmental Research in Psychiatry"), the authors discuss characteristics of the environment as it actually functions in the real world. They speak of multiple “networks of many interacting elements…”

Individuals are exposed to these elements as they accumulate over time, so that any one single exposure usually means very little. Exposure also is “dynamic, interactive, and intertwined” with various other domains, including those internal to individuals, what individuals do within various contexts, and the external environment itself—which is constantly changing. Last but not least, each individual attributes a different, and sometimes changing, psychological meaning to everything that happens to them. This meaning attribution can alter the effect of each environmental exposure dramatically.

Each environmental factor confers risk for a "diverse set of mental disorders." These factors are far from universal, so some people remain completely unexposed to them. They interact with each other, so they are not independent. They are time sensitive. They are dose dependent even within similar environments, meaning individuals are not exposed to them at the same level. And they can be confounded by each individual’s differing genetic propensities.

With all that to consider, drawing final conclusions from a few studies just does not cut it as real science. But the field tends to believe in those conclusions as if they were gospel.

Tuesday, October 2, 2012

More Astonishing, Cutting-Edge Research in Psychiatry


As we did in my post of November 30, 2011, it’s once again time to look over the highlights from my two favorite medical journals, Duh! and No Sh*t, Sherlock.

As I pointed out in that post, research dollars are very limited and therefore precious. Why waste good money trying to study new, cutting-edge, or controversial ideas that might turn out to be wrong, when we can study things that are already thought to be true but have yet to be "proven"? Such an approach increases the success rate of studies almost astronomically.

Psychiatric blogger Nassir Ghaemi agrees: "In some estimates, less than 10% of all NIMH funding is aimed at clinically relevant treatment research on major mental illnesses (i.e., schizophrenia or bipolar disorder). Further, that limited funding is sparingly distributed: the highly conservative, non-risk-taking nature of NIH peer review is well-known."

Here are some of the most interesting new findings reported in these journals.

Side Effects and Therapeutic Effects Are Not The Same Thing

A brilliant new study (http://www.reuters.com/article/2011/12/08/us-depressed-worse-idUSTRE7B72JM20111208) on one anti-depressant concluded: "In the first few months patients either responded to the treatment and improved or didn't and still suffered side effects." Really? People can get side effects from a drug but no benefit? That actually happens?!? I never knew.


The Charleston (WV) State Journal (3/27, Burdette) reports that "a report released last week by Auburn University shows that the high poverty levels and low educational attainment among women have a direct correlation to the region's high number of teen births." The media is so irresponsible! Why haven't they pointed out this correlation more often than the previous 13,000 times?



What?  Combat is more stressful than merely serving in the military??  But it looks like so much fun.

Listening to Loud Music Associated with Substance Abuse

The Los Angeles Times (5/22, Kaplan) "Booster Shots" blog reports that according to a study published online May 21 in the journal Pediatrics, "Teens and young adults who listen to digital music players with ear buds are almost twice as likely as non-listeners to smoke pot.” As a veteran of the San Francisco music scene in 1967, I just never noticed that the people in the audience at the Fillmore auditorium were smoking pot. I always thought that smell came from the incense they were burning, and that those funny cigarettes were just home-rolled tobacco. Additionally, their LSD use was greatly exaggerated. They were not hallucinating. Those light shows were just really amazing.

And on a related note:

Small Study: Medical Marijuana May Impair New Patients' Driving Skills.

Reuters (7/27, Pittman) reported that although it often goes unnoticed during sobriety tests, the use of medical marijuana at the typical doses used by AIDS, cancer and chronic pain patients causes users who have not yet built up a tolerance to cannabinoids to totter from side-to-side when driving, according to a study published online July 12 in the journal Addiction.  Well I’ll be!  Intoxicants impair driving skills?  Who knew?  Legislatures should look into doing something about this, or someone could get killed.

Review: Negative Interactions with Staff Common Cause of Aggression on Psychiatric Wards

MedWire (5/26, Cowen) reported, "Negative interactions with staff are the most common cause of aggression and violence among inpatients in adult psychiatric settings," according to a review published in the June issue of the journal Acta Psychiatrica Scandinavica. Now come on!! Patients with schizophrenia are just naturally aggressive. It’s in their genes! Don’t let their completely flat affect and their total inability to organize a break out from a locked ward fool you.

Parental Fighting May Lead to Later Depression, Anxiety in Children

 

HealthDay (6/16, Goodwin) reported that "slamming doors, shouting and stony silences between mom and dad can really scar kids emotionally," according to a study published in the journal Child Development. Investigators found that "Kindergarteners whose parents fought with each other frequently and harshly were more likely to grow into emotionally insecure older children who struggled with depression, anxiety and behavior issues by 7th grade."  Here we go again. This parent bashing has just got to stop. We all know very well that behavior is controlled by genes and that environmental stress has absolutely zero psychological consequences.

And as long as we are on the subject of parent bashing, here’s some more evidence for this horrible trend:

Children's Adherence to Mental-Health Treatment May Depend on Parents' Perceptions

 

MedPage Today (8/4, Petrochko) reported, "Whether or not a child maintains a treatment for mental health may depend on parents' perceived benefits of that treatment," according to a 573-participant study published in the August issue of the journal Psychiatric Services. How many times can I stress this? Parenting skills are absolutely irrelevant in determining children's behavior.

Tuesday, March 20, 2012

Immaturity Officially a Disease: You Saw It Here First

The kid in red is in the same grade and classroom as the other four


In my post of September 20, 2010, Immaturity in Young Children: Officially a Disease, I described two studies published in a very obscure journal, the Journal of Health Economics, that both found nearly identical data about the diagnosis of ADHD in school children. In these articles, two different research groups (Evans, Morrill, & Parente, 29, 2010, 657–673; Elder, 29, 2010, 641–656), using four different data sets in different states, came to the same conclusion.

In one, roughly 8.4 percent of children born in the month prior to their state’s cutoff date for kindergarten eligibility – who typically become the youngest and most developmentally immature children within a grade – were diagnosed with ADHD, compared to 5.1 percent of children born in the month immediately afterward. The study also found that the youngest children in fifth and eighth grades were nearly twice as likely as their older classmates to regularly use stimulants prescribed to treat ADHD!  The results of the second study were quite similar.

Translated into numbers nationwide, as Steindór summarized in his comment on my blog, this would mean that between 900 thousand (Elder) and 1.1 million (Evans et al. 2010) of the children under age 18 in the US diagnosed with ADHD (at least 4.5 million) are misdiagnosed.
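As a rough sketch of how such an extrapolation works, here is the arithmetic using the rates from the first study. Only the relative excess is computed here; turning that into a nationwide count, as the papers do, requires further assumptions about birth-month distributions that are not shown.

```python
# Back-of-the-envelope version of the misdiagnosis estimate.
# Inputs are the diagnosis rates reported in the first study.
youngest_rate = 0.084  # ADHD rate among children born just before the cutoff
oldest_rate = 0.051    # ADHD rate among children born just after the cutoff

# If the older children's rate reflects "true" ADHD prevalence, the excess
# among the youngest children presumably represents misdiagnosed immaturity.
excess = youngest_rate - oldest_rate
excess_share = excess / youngest_rate

print(f"Excess diagnosis rate: {excess:.1%}")                              # 3.3%
print(f"Share of youngest-group diagnoses in excess: {excess_share:.0%}")  # 39%
```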

Now, a year and a half later, another study, published in a more widely read journal and reported widely in the news, came up with the exact same conclusion (“Influence of relative age on diagnosis and treatment of attention-deficit/hyperactivity disorder in children” by Richard L. Morrow, et al., Canadian Medical Association Journal, published online March 5, 2012).


In a cohort study (a study of a group of individuals with something in common followed over time) of more than 900,000 Canadian children, researchers found that boys born in the month of December (the cutoff birth date for entry to school in British Columbia) were 30% more likely to be diagnosed with ADHD than boys in their grade who were born the previous January.
This number was even more dramatic in the girls, with those born in December 70% more likely to be diagnosed with ADHD than girls born in January.
In addition, both boys and girls were at a significantly higher risk of being prescribed an ADHD treatment medication if they were born in the later month than in the earlier one.
 "It could be that a lack of maturity in the youngest kids in the class is being misinterpreted as symptoms of a behavioral disorder," said lead author Richard L. Morrow.  


Could be? How about “is”?
Some of these behaviors could include not being able to sit still, not being able to focus and listen to the teacher, or not following through on a task, he added.


"You wouldn't expect a 6- and 9-year-old to behave the same way, but we're often putting a 6- and 7-year-old in the same class. And we're learning that you can't expect the same behaviors from them," he added. "We would like to avoid medicalizing a normal range of childhood behaviors."  No sh*t!


This problem has been complicated recently by the fad of "redshirting" children for kindergarten: overachieving parents purposely starting them at age six rather than five in order to give them a competitive advantage academically over their classmates.  Now children in the same class may be as much as two years apart in age.
The study authors went on to note that potential harms of overtreatment in children include increased risk for cardiovascular events, as well as effects on growth, sleep, and appetite. There was no mention of the harm of making this diagnosis and using these potentially toxic medications instead of investigating and addressing possible psychosocial reasons for “hyperactivity,” such as a chaotic family environment or abusive and/or inconsistent parenting practices.
This brings up the issue of the risk to the heart and the rest of the cardiovascular system posed by stimulant use. Several recently published studies, reported in both the medical and lay media, claim that this risk is minimal.


This stands in interesting contrast to the publicity about an article, published this week in BMJ Open (an open-access journal from the publishers of the British Medical Journal), that purported to show that the use of sleeping pills increases the risk of dying from all causes by a factor of 4 over just two and a half years. Sleeping pills are generally regarded as far less dangerous and less likely to be abused than stimulants. The DEA categorizes benzos as "Schedule IV" (lower likelihood of abuse) and stimulants as "Schedule II" (most likely to be abused short of the illegal "Schedule I" drugs).
That study about sleeping pills seemed to me a bit hard to believe, especially since epidemiological studies are notoriously unreliable. But even if the numbers are valid, the fact that the risk of death from all causes increases most likely means that there is some other characteristic, or a whole set of characteristics, of the population of people who are prescribed sleeping pills that is not shared by other populations. Those additional factors might explain the findings.
As for stimulants, the February 2012 issue of the American Journal of Psychiatry contains an article on methylphenidate (Ritalin and its variations) and the risk of heart problems in adults. Using a large medication database, researchers matched about 44,000 methylphenidate (MPH) users with about 176,000 controls.


They looked mainly at the incidence of cardiac events, defined as myocardial infarction, stroke, ventricular arrhythmia, or sudden death. They found a 117% increased risk (more than double the risk) in the Ritalin group. After adjustment for some potential confounding factors, the risk was still 84% higher.
The news stories about the study on the benzos seemed meant to scare people away from using them, while the stories about increased risk in stimulant users seemed meant to reassure people about using them. Of course, both of these studies described relative risk, not absolute risk (see my post Stats.com from November 2, 2011).


This means that “double the risk” might mean the risk goes from, say, a tenth of a percent to two tenths of a percent. Double a very small risk is still a very small risk; the absolute risk in this example would have gone up just one tenth of one percent. Still, if millions of people are getting the prescriptions, this increased risk can apply to a sizeable number of people.
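To make the relative-versus-absolute distinction concrete, here is a minimal sketch using the study's 117% figure. The baseline rate and the number of prescriptions are illustrative assumptions, not numbers from the study:

```python
# Relative vs. absolute risk, with illustrative numbers only.
baseline_risk = 0.001  # hypothetical 0.1% baseline event rate
relative_risk = 2.17   # a "117% increased risk" is a relative risk of 2.17

treated_risk = baseline_risk * relative_risk
absolute_increase = treated_risk - baseline_risk

print(f"Treated risk: {treated_risk:.3%}")            # 0.217%
print(f"Absolute increase: {absolute_increase:.3%}")  # 0.117%

# A small absolute risk can still matter at scale:
prescriptions = 5_000_000  # hypothetical number of users
extra_events = prescriptions * absolute_increase
print(f"Extra events among {prescriptions:,} users: {extra_events:,.0f}")
```

Doubling a 0.1% risk adds only a tenth of a percent in absolute terms, yet across five million hypothetical users that is thousands of extra events, which is the point of the paragraph above.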

Physicians will not be able to see the increased risk in their clinical experience. As Nassir Ghaemi says, "They don't happen in 10-20% of patients in our practice; they happen in 1-2% (or 0.1-0.2%), and so, the average clinician, faced with a welter of patients, doesn't make the causal connection."

The question should be: what are the risks versus the benefits of taking the medication? For sleeping pills, for instance, one might want to know whether there is a much larger increased risk of death for people who are sleep deprived. For example, before the practice was stopped, medical interns would routinely work 36-hour shifts. Fatal accidents on the car trip from the hospital back home were not all that unusual.


Then there is the whole question of other, non-pharmacological treatments, which is relevant for both the use of sedatives and stimulants.  Of course, they do not work for everyone either.

An editorial in the same issue of the American Journal of Psychiatry as the study of Ritalin in adults sounded reassuring about stimulant use.  Based on that study, I’m not so reassured.