
Friday, July 31, 2015

If Free Will Does Exist, How Often Do We Employ It in Our Daily Lives?




In my post of 7/31/10, I discussed a somewhat widely publicized study published in 2008 in Nature Neuroscience, in which researchers using brain scanners could predict people's very simple decisions seven seconds before the test subjects were even aware of what their decision was. 

The concern raised at that time was whether some totalitarian government might start arresting people based on a determination of what they were going to do at some time in the future, like the precrime unit in the movie Minority Report.


This study still comes up in philosophical discussions of a different issue - whether people even really have free will at all, or if we are more like pre-programmed robots.

The decision studied in the experiment (whether to hit a button with one's left or right hand) may not be representative of complicated choices that are more integrally tied to our sense of self-direction. Regardless, the findings raise interesting questions about the nature of self and autonomy: How free is our will? Is conscious choice just an illusion?

"Your decisions are strongly prepared by brain activity. By the time consciousness kicks in, most of the work has already been done," said study co-author John-Dylan Haynes, a neuroscientist who was at the Max Planck Institute. Haynes updated a classic experiment by Benjamin Libet, who showed that a brain region involved in coordinating motor activity fired a fraction of a second before test subjects chose to push a button. Hayne's study showed a much large time gap between a decision and the experience of making it.

In the seven seconds before Haynes' test subjects chose to push a button, activity shifted in their frontopolar cortex, a brain region associated with high-level planning. Soon afterwards, activity moved to the parietal cortex, a region of sensory integration. Haynes' team monitored these shifting neural patterns using a functional MRI machine.

Taken together, the patterns consistently predicted whether test subjects eventually pushed a button with their left or right hand, a choice that, to them, felt like the outcome of conscious deliberation. In fact, their decision seems to have been made before they were aware of having made a choice.

So does this mean the feeling and belief we have that we have free will is just an illusion?

Well, possibly, but probably not. For one thing, as mentioned, the experiment may not reflect the mental dynamics of much more complicated and/or emotionally meaningful decisions. Also, the predictions were not 100% accurate. Might free will enter at the last moment, allowing a person to override a subconscious decision?

But there is a much bigger problem with drawing conclusions about free will from this type of experiment. We usually do not employ free will, in the sense of making conscious choices, when we engage in the vast majority of our usual daily activities. If individuals had to weigh the pros and cons of their every move as they negotiated their lives, or if they had to stop and think about how to behave before doing the most routine activities, they would spend so much time deliberating that they would be nearly paralyzed. 

Most of our "decisions" are based on environmental cues which are processed subconsicously and which then trigger habitual behavior without requiring any thought on our parts at all. 

Through our life experiences, we all build mental models of our environment called schemas which then, when cued by environmental triggers, automatically kick in. Cues elicit a certain well-rehearsed repertoire of responses.

To understand this, think of your daily drive to work. Most drivers, while negotiating a familiar route, have at one time or another come to the realization that they had not been paying the least attention to what they had been doing for several minutes. Nonetheless, they arrived at their destination, with almost no recollection of any of the landmarks that they had passed.

Surely, we have the option to choose to make a turn that would take us away from our intended destination, but, under most circumstances, why would we waste our time even considering something like that?

A lot of predictable situations like this are handled on "automatic pilot." Gregory Bateson observed that ordinary situations and "constant truths" are assimilated and stored in deep brain structures, while conscious deliberation is reserved for changeable, novel, and unpredictable situations.

This does not mean, however, that rigid behavior cannot be overcome by conscious deliberation. In neurologically intact individuals, the more evolutionarily advanced part of the human brain, the cerebral cortex, can override even the most reflexive gross motor behaviors.

So perhaps the brain processes described in this study are the ones that determine whether or not an individual goes on automatic pilot, or has to stop and think about potential unanticipated consequences. React in the usual habitual way, or re-assess? When it comes to pushing an inert button in a lab, the consequences for the subject are pretty predictable: there will not be any.

Unless the subject were purposely trying to foul up the experimenter's protocol, which would be a strange thing to want to do in an experiment with no social consequences for the subject, why would they expend brain energy making a choice? They would not. They would just "go with their gut."

Therefore, from the data in this study alone, it is not possible to know which interpretation is correct: the experimenter's, or the one I just suggested.

Maybe you don't have free will, maybe you do. As I said in the earlier post, I am pretty sure I do.

Tuesday, July 21, 2015

Groupthink: How Even Scientists Con Themselves in Order to Fit In




In his brilliant book, The Righteous Mind, Jonathan Haidt argues convincingly that logic evolved in humans not to establish the truth about the world or to establish facts, but to argue for ideas that benefit the kin and ethnic groups to which we belong, as well as to maintain a good reputation within those groups. My colleague Gregg Henriques calls this the Justification Hypothesis: logic is used to justify our group norms.

Many of our beliefs are based not on facts or reason at all, and in fact seem to be impervious to them. They are instead based upon our groupishness (the opposite of selfishness). For almost all of us, it is generally more important to look right than to be right.

This type of reasoning appears at the level of the individual, where it is called defense mechanisms and irrational beliefs. It appears at the level of the family or kin group, where it is called family myths. It also exists at the level of cultural groups, where it is called theology. Or if it is not your particular brand of theology, then it is called mythology.

Another name for this phenomenon in general is groupthink. We cede the right to think for ourselves for the sake of our group, and we often try to discourage our intimates from thinking for themselves for the same reason.

Even scientists are not immune. So what are some of the mechanisms by which they do this to themselves and to other people? That is the topic of this post.

First, a brief review of some previous posts. As I described in one post, I realized a long time ago that the so-called defense mechanisms discussed by the Freudians and the irrational thoughts catalogued by cognitive behavior therapy (CBT) practitioners had a purpose that was not only intra-psychic but interpersonal as well. 

Defense mechanisms are defined as mental processes initiated, typically subconsciously, to avoid ideas or impulses that are unacceptable to our value system, and to avoid anxiety. Another name for this is mortification. We may, for example, compulsively try to act in the opposite way that the unacceptable impulse would dictate (reaction formation), or displace our anger from one person onto another, safer one.

Their interpersonal purpose is to screen out beliefs and impulses that are threatening to the kin group, which is also why they are threatening to the individual within the kin group.

The irrational thoughts catalogued by CBT, which its theorists attribute to humans being basically irrational, also function much like the defense mechanisms. If you, for example, "catastrophize" about what might happen if you indulged an impulse that your kin group does not approve of (by, say, imagining the worst possible outcome of doing so), you will indeed scare yourself away from engaging in it. (Of course, the CBT folks reject the whole concept of defense mechanisms - I recall a somewhat sarcastic reply from cognitive therapy pioneer Albert Ellis when I brought this up at one of his talks).

On a related note, there are the logical fallacies that are enumerated by logicians and which are well known to members of college debate squads. An example is post hoc reasoning, which assumes wrongly that if event A is quickly followed by event B, then it is true that A caused B. I saw patients engage in many of these fallacies when confronted with the negative consequences of the behavior that seemed to be demanded of them by their families. So, I believe, the logical fallacies can also be used as defense mechanisms - specifically designed to avoid troublesome questions about cherished beliefs that on the surface are simplistic at best and preposterous at worst.

As mentioned above, scientists are not immune from groupthink and groupishness. In fact, they are nearly as likely as anyone else to employ them. I have witnessed many times in scientific debates how the debaters would subtly employ various techniques and mind tricks to silence critics of their studies or ideas.

A few examples among many:

a) Black-and-white, or all-or-none thinking. Biological psychiatrists seem to think that everything in the DSM diagnostic manual is a brain disease, whereas the anti-psychiatry folks believe that nothing is, and the listed diagnoses are all just alternate lifestyles, different ways of looking at the world, or reactions to trauma.

b) Arguments that advance the idea that, because many parts of the thinking of someone like, for instance, Freud were totally off-base (like "penis envy" and his theories about homosexuality), ALL of his ideas must therefore have been wrong (including such obviously real things as intra-psychic conflict and defense mechanisms).

c) Stating facts about the results of studies without describing certain contextual elements that put those facts in a different light. A great example I have already blogged about is how the leader of the National Institute on Drug Abuse spoke about experiments showing monkeys pulling a lever to get cocaine until they died - while neglecting to mention that the animals were in solitary confinement with nothing else to do. When that was not the case, they behaved very differently.

d) Conflating the issue of how a phenomenon arises or what it means in the scheme of things with the issue of whether the phenomenon even exists at all. For example, CBT'ers would deny that the concept of resistance - a psychoanalytic idea that people are often highly invested in their psychological symptoms and resist change - is a real phenomenon. All the while, they failed to report in their case studies the high level of non-compliance with CBT homework assignments by their patients in treatment.

e) Grossly exaggerating the strength of certain research findings while completely ignoring the study's weaknesses and problematic assumptions.

f) Conflating another scientist's conclusion about the significance of a clinical anecdote with the description of the anecdote, and not considering what else the anecdote might mean.

g) Scientism: the idea that randomized placebo-controlled studies of something are the end-all and be-all of science, and that everything else is just anecdotal and not science at all. I answer those who make this argument by asking for volunteers for a randomized placebo-controlled study on whether parachutes reduce the incidence of deaths and injuries during falls from airplane flights. 

I also point out that scientism creates a problem here: it is only a slight exaggeration to say that, in order to study an important psychological phenomenon like self-deception with a large enough study sample and within a reasonable time frame, you would pretty much have to ask people about their opinion of themselves. Sorta defeats the goal of the study, doesn't it? So does this mean that studying self-deception should be completely off limits to scientists? 


Scientists will often accuse other scientists of doing these things while doing them themselves. This is projection - another defense mechanism. These mental mechanisms are so pervasive in human beings that we are quite likely to find at least some of them in any scientific discussion. 

Discerning readers will no doubt find examples in which I do some of these things myself in my posts on my blogs. 

Friday, July 10, 2015

Studies that Show Drugs are Ineffective are Often Deep-Sixed




The production and distribution of scientific information has, of late, frequently become a broken process. Richard Horton, editor of the pre-eminent medical journal The Lancet, recently observed, “The case against science is straightforward: much of the scientific literature, perhaps half, may simply be untrue. Afflicted by studies with small sample sizes, tiny effects, invalid exploratory analyses, and flagrant conflicts of interest, together with an obsession for pursuing fashionable trends of dubious importance, science has taken a turn towards darkness.”

This is a problem in all of science. When it comes to industry-sponsored science, industry employs well-thought-out and highly researched processes for inducing scientists, doctors, and the general public to buy into highly biased ideas that are good for the industry's bottom line. This of course is true for the pharmaceutical industry just as it is for oil companies and the like. 

Not to mention the worst and by far the most dangerous-to-your-health offender of all, the managed care health insurance industry. George Dawson does a superb job tearing them a new one on his blog. But this post is about the pharmaceutical industry, other scientists who do drug studies, and the politics of acceptance of studies by medical journals. 

A well-designed study - manipulation of study design is a whole 'nother issue - that tries to measure whether a drug is effective for treating certain symptoms or a certain condition can turn out, broadly, in one of two ways: the drug is either shown to be effective for the symptom or condition, or it is shown not to be (or not to be very effective). 

Any single study's result may be invalid due to a problem with the sample of subjects picked, which may be unrepresentative of the whole population of subjects exhibiting the particular condition or symptom under study. A positive result can also be a statistical fluke: with the conventional 5% significance threshold, an ineffective drug will still "pass" roughly one trial in twenty by chance alone. So, in general, positive results need to be replicated in several different studies before the US Food and Drug Administration (FDA) will approve the drug for public consumption.

But what happens if a drug company, or academic scientists fighting to get their studies published so they can attain tenure, submit for publication only the studies that seem to show that a drug works, and withhold the studies that had a negative result? Well, of course, it then looks to everyone like the evidence for the drug's efficacy is much stronger than it actually is.
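
To make that concrete, here is a minimal simulation sketch in Python - purely my own illustration, not anything from the actual studies or journals, with the trial counts, sample sizes, and effect sizes all made up. It runs many hypothetical trials of a drug that has no real effect and then "publishes" only the ones that happen to come out statistically positive:

```python
# A made-up simulation: run many trials of a drug with NO true effect,
# then "publish" only the ones that happen to come out statistically positive.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_trials = 200      # hypothetical independent trials of the same drug
n_per_arm = 50      # patients per arm in each trial
true_effect = 0.0   # the drug actually does nothing

all_effects, published_effects = [], []

for _ in range(n_trials):
    drug = rng.normal(true_effect, 1.0, n_per_arm)   # symptom improvement, drug arm
    placebo = rng.normal(0.0, 1.0, n_per_arm)        # symptom improvement, placebo arm
    t_stat, p_value = stats.ttest_ind(drug, placebo)
    effect = drug.mean() - placebo.mean()
    all_effects.append(effect)
    # "Submit for publication" only if the result looks positive and significant
    if p_value < 0.05 and effect > 0:
        published_effects.append(effect)

print(f"Trials run:                    {n_trials}")
print(f"Trials 'positive' by chance:   {len(published_effects)}")
print(f"Mean effect, all trials:       {np.mean(all_effects):+.3f}")
if published_effects:
    print(f"Mean effect, published trials: {np.mean(published_effects):+.3f}")
```

By chance alone, a handful of the trials come out "positive," and if only those reach print, the published record shows a consistent, healthy-looking effect for a drug that does nothing.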

It is also true that, even when a negative study is submitted for publication, the editors of journals often reject the manuscript. Editors seem to think that, when a study did not show a positive result, there is no reason to publish it, supposedly because it does not tell us anything. That's ridiculous, of course - negative studies can tell us what is not true, and are essential to good science. 

This bias against so-called "negative" studies happens all over science, by the way, whether studies are industry sponsored or not. I remember renowned biologist Stephen Jay Gould decrying this in a book from decades ago.

In order to deal with the problem of negative studies not seeing the light of day as it applies to drug studies, a 2007 U.S. Federal law required study authors to report the results of all of their clinical trials to a public website. The website is clinicaltrials.gov, which draws 57,000 visitors a day, including people who are confronting serious diseases and looking for experimental treatments.

The law was enacted also because of public concern that a failure to report negative results could harm participants in similar studies by failing to warn them of possible risks.

The Food and Drug Administration Amendments Act requires sponsors of most clinical trials to register and report their basic summary results within 1 year of either completing data collection for the primary outcome or terminating the trial. Failure to report study findings is supposedly punishable by sanctions, including civil penalties of up to $10,000 per day and loss of funding.

So how are we doing? Not so good.

According to a widely reported story, a study from Duke University finds that five years after the reporting law took effect, only 13 percent of scientists running clinical trials had reported their results! The article about this study was published online in the New England Journal of Medicine.

Only 13.4% of investigators reported their results within 1 year, and only 38.3% reported their results at any time during the study period (N. Engl. J. Med. 2015;372:1031-9). Moreover, “despite ethical mandates, statutory obligations, and considerable societal pressure, most trials that were funded by the National Institutes of Health (NIH) or other government or academic institutions ... have yet to report results at ClinicalTrials.gov, whereas the medical-products industry has been more responsive to the legal mandate,” the researchers explained.

Interesting that the pharmaceutical industry is doing somewhat (although not a whole lot) better than the NIH-funded scientists on this score.

At 1 year, the rate of reporting was 17.0% for industry-sponsored trials, 8.1% for NIH-funded trials, and 5.7% for other government- or academically funded trials. The corresponding rates of reporting at 5 years were only slightly better, at 41.5%, 38.9%, and 27.7%, respectively.

According to Clinical Psychiatry News, despite the regulation’s threat of penalties, no enforcement has yet occurred, the researchers noted, in part because this portion of the Food and Drug Administration Amendments Act is still under public discussion and hasn’t been finalized. 

Anyone who wants to contribute towards changing this situation can do so at Alltrials.net.

The failure to publish negative studies is not the only strategy employed by Big Pharma to bias everyone's impression of the data on their drugs' effectiveness. Some other tricks they employ include:
  • Publishing positive studies more than once by using journal "supplements."
  • Conducting a study at multiple locations and then publishing the results of the individual locations as if they were separate trials - and doing so selectively if that makes the drug look better.
  • Publishing different measures of drug efficacy at different times to give the impression that the results published later are from a new or different study.
  • Following study patients for longer and longer time periods and then publishing the results from each time period separately, again making it look like there was more than one study.
  • Publishing positive results in major or more prestigious journals and negative or neutral studies in more obscure journals.
  • Combining the results of multiple trials in ways that are more favorable than any individual study in its own right.

Let the buyer beware!