February 22, 2011

The Decline Effect Is Stupid

no_correlation.jpg
we were surprised to find the data fit well within the two axes.  Further research is needed


"Is there something wrong with the scientific method?" asks Jonah Lehrer in The New Yorker.

The premise of the article is a well-known phenomenon called the Decline Effect.  As described in the story, that's when exciting new results, initially robust, seem not to pan out over time.  Today a series of studies shows X, next year studies show less than X, and in ten years it's no better than nothing.

To be clear, this is what the Decline Effect is not: the finding of better data that shows your initial findings were wrong.  The initial findings are right-- they happened-- but they happen  less and less each time you repeat the experiments.  The Decline Effect is a problem with replication.

An example is ESP: the article describes a study in which a guy showed remarkable ability to "guess which card I'm holding."  He was right 50% of the time.   That happened.  But in subsequent experiments, he could do it less.  And less.  And then, not at all.

Many critics of Lehrer's article read this and say, aha! the real explanation is regression to the mean.  Flip a coin and get heads nine times in a row: it could happen, but if we flip that coin enough times we will see that it is ultimately 50/50.
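The critics' regression story is easy to sketch in a few lines of Python (a made-up simulation, nothing from the article): run a pile of fair coins, "publish" only the lucky ones, and watch their effect evaporate on replication.

```python
import random

random.seed(0)  # reproducible

def heads(n=10):
    """Number of heads in n fair coin flips."""
    return sum(random.random() < 0.5 for _ in range(n))

# "Initial studies": run 100,000 coins, publish only the flukes
# (9+ heads out of 10).
initial = [heads() for _ in range(100_000)]
published = [h for h in initial if h >= 9]

# "Replications": rerun the exact same fair coin for each published result.
replicated = [heads() for _ in published]

print(sum(published) / len(published))    # ~9.1 -- the exciting finding
print(sum(replicated) / len(replicated))  # ~5.0 -- regressed to the mean
```

The selected coins were never special; the selection was. That is the whole of the critics' argument.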

But that explanation is incorrect; the article explicitly states that the Decline Effect is not regression to the mean:

The most likely explanation for the decline is an obvious one: regression to the mean. As the experiment is repeated, that is, an early statistical fluke gets canceled out... And yet Schooler has noticed that many of the data sets that end up declining seem statistically solid--that is, they contain enough data that any regression to the mean shouldn't be dramatic. "These are the results that pass all the tests," he says. "The odds of them being random are typically quite remote, like one in a million. This means that the decline effect should almost never happen. But it happens all the time!..." 

And this is why Schooler believes that the decline effect deserves more attention: its ubiquity seems to violate the laws of statistics.

Lehrer believes that the Decline Effect is an inexplicable byproduct of the scientific method itself.

So?  What gives?

By now, many scientists have weighed in on this article, offering the usual list of explanations-- publication bias, selection bias, regression to the mean.  But while these are real problems in the pursuit of science, the real explanation of the Decline Effect goes unmentioned.

A hint of "what gives" is contained in the rest of Schooler's quote, above:

...This means that the decline effect should almost never happen. But it happens all the time! Hell, it's happened to me multiple times."

The true explanation for the Decline Effect is one no one cites because the place you would cite it is the cause itself.  I am not exaggerating when I say that the cause of the Decline Effect is The New Yorker.


II.
 
The Decline Effect is a phenomenon not of the scientific method but of statistics, so right there you know we are out of the realm of logic and into the realm of  "well, this sort of looks like a plausible graph, what should we do with it?"   Here's the article's money quote:

But now all sorts of well-established, multiply confirmed findings have started to look increasingly uncertain. It's as if our facts were losing their truth: claims that have been enshrined in textbooks are suddenly unprovable. This phenomenon doesn't yet have an official name, but it's occurring across a wide range of fields, from psychology to ecology.

A wide range of fields from the almost entirely made-up to the slightly less made-up are losing their "truth?"  This phenomenon isn't occurring in physics.  You could (and people did) build a Saturn V launch platform on the unscarred edifice of Maxwell's equations, and then 40 years later build an iPhone on top of that same edifice.  It's amazing what you can do with the black magic of electromagnetic theory.

Psychology, e.g., is different, because it attempts to model the particular minds of some humans at this particular time in this particular culture, and those models may apply 3 or 3000 years from now, or they might not.  Ecology attempts to form a static model of the dynamic relationship of constantly evolving organisms to each other and their environment, which we are wrenching to and fro in real-time.  But there is no static "reality" in these fields to observe.  In these soft sciences, the observation of reality doesn't just change the results; sometimes the observation actually changes the reality almost completely.

In these regression sciences, we throw a ton of data into Visicalc and see what curves we can fit to them. And then, with a wink and a nod, we issue extremely broad press releases and don't correct the journalists or students when they confuse correlation with causation.  We save that piercing insight for the cushy expert witness gigs.

The problem isn't that the Decline Effect happens in science; the problem is that we think psychology and ecology and economics are sciences.  They can be approached scientifically, but their conclusions cannot be considered valid outside of their immediate context.  The truth, to the extent there is any, is that these fields of study are models, and every model has its error value, its epsilon that we arbitrarily set to be the difference between the model and observed reality.  Quantitative monetary theory predicts that given this money supply and this interest rate, inflation should be 2%, but inflation is actually 0.4%.  Then let's just set epsilon to -1.6% and presto!  Economics is a Science.

III.

To make its point about the Decline Effect-- and unintentionally making mine about science--  the article predictably focuses on the psych drugs that we hate to love to take, that keep the McMansions heated and the au pairs blondily Russian.  "They were found, scientifically, to be great, and now we know, scientifically, that they're not!"  Medicine is not a science, and despite the white coats and antisocial demeanor, doctors are not scientists.  Docs and patients both need to get that into their heads and plan accordingly.  That's why we say doctors practice medicine.  If medicine were a hard science, doctors would not have been surprised and puzzled by the effects of some of these drugs.  You can show me PowerPoint slides of depression rating scales for as long as the waitress keeps refilling my drink, but none of that "science" explains why imipramine doubles the mania rate, Depakote does nothing to it, and Zoloft lowers it, with apparent disregard for their scientific classifications.

The problem isn't the Decline Effect, the problem is you believed the data had the force of  F=ma.  No one should be surprised when medical "truths" turn out to be wrong-- they were never true to begin with.  And if you made sweeping policy proclamations based on them, well, you got what you paid for.

IV.

But for all this imprecision, the criticism directed at the "social" sciences-- by folks like Jonah Lehrer-- is even worse than the imprecision it targets.  Eggheads are collecting data in routine and predictable ways.  They are at least consistently using statistical analysis to analyze that data.  It isn't art history by postdocs with warez Photoshop.

So when I read this, I have to manually push in my temporal artery:

Many researchers began to argue that the expensive pharmaceuticals weren't any better than first-generation antipsychotics, which have been in use since the fifties. "In fact, sometimes they now look even worse," John Davis, a professor of psychiatry at the University of Illinois at Chicago, told me.

Shiver me timbers.  Okay, Professor Davis, now that your conclusion about the inferiority of the expensive drugs has been read by an audience twenty-five times larger than that of any study you've ever read, let alone written, can you please show us the data that supports your conclusion that atypicals are less efficacious?  Oh, that's not what you meant.   I'm confused, what do you mean by "worse?"  Wait, were you talking about depression or schizophrenia?  OCD?  I'm lost, let's back up.  And while you're at it, please define for us/Jonah Lehrer the other technical terms: "sometimes," "they," "now," "look," and "even," because I have no idea what the hell they mean in this context, and, big money down, you don't either.

This is where the "scientific method" is breaking down.  Not in the lab or at the clinical trial.  It's breaking down in the sloppiness of the critics.  If any researchers want to argue about the efficacy of new drugs over the old ones, there are ways and places to do that.  The New Yorker is not among them, because it lets scienticians get away with sloppy soundbites, and leaves anywhere from nine to 3M layman readers with the impression that scientists "know" "the newer meds" are "worse."

And the moment you talk to The New Yorker, your misinterpreted statistical association becomes truth.  Certainly for the layman's mind, but also in the mind of the Professor.  I'm going to bring up Depakote again until I get a public apology-- do you know how many times a day I have to correct psychiatrists that Depakote does not have "a lot of studies" supporting its efficacy in maintenance bipolar-- let alone an actual indication?

Left alone in his office with a stack of contradictory papers, he probably wouldn't be so flippant about it all.  The pursuit of science is slow, excruciating, unexciting work.

But that won't get you any grant money, let alone quoted in The New Yorker.

V.

An example:

What Møller discovered is that female barn swallows were far more likely to mate with male birds that had long, symmetrical feathers. This suggested that the picky females were using symmetry as a proxy for the quality of male genes. Møller's paper, which was published in Nature, set off a frenzy of research. Here was an easily measured, widely applicable indicator of genetic quality, and females could be shown to gravitate toward it. Aesthetics was really about genetics....In the three years following, there were ten independent tests of the role of fluctuating asymmetry in sexual selection, and nine of them found a relationship between symmetry and male reproductive success. It didn't matter if scientists were looking at the hairs on fruit flies or replicating the swallow studies--females seemed to prefer males with mirrored halves.

That's what Lehrer wrote. I know you didn't read it all. Here's what you read:

"
Females seem to prefer symmetric males."

The actual study suggested nothing about what the picky females were doing.  Lehrer inferred it.  By the time we get to the end of the paragraph all the reader remembers is that women prefer to have sex with symmetric guys, which is simply, undeniably, not true.  But none of the studies in that paragraph ever concluded that.  They each made specific conclusions about the specific creature they were studying.  And if you think I'm splitting hairs, then you are the reason for the "Decline Effect."

Scientifically detected associations, in specific situations and contexts, are then generalized by the popular press-- or at least by the profession's internal pop culture-- and those generalizations are used as working knowledge. Those generalizations, which were never true, are the starting point for the future decline in effect that Lehrer is worried about.

When the article then goes on to describe the breakdown of this sweeping generalization in studies after 1994 (on other species), it attributes that to the Decline Effect.  It's not.  When you look at the studies together, what you should have inferred is "symmetry is an associated factor in mate selection by females in only some species and not others, and more research is needed to explain why."  Instead, the article attributes its inability to summarize the variety and complexity of nature in a 140-character Twitter message to an underlying failure in the 500-year-old guiding principle of science.

Worse, as the article points out, sometimes journals want to publish only confirmatory findings, which sets the stage for the discovery of a Decline Effect later on.  But the article doesn't go far enough: they're not looking for confirmation of a previous study, they are looking for confirmation of a sweeping generalization.  Not: "Zyprexa is more efficacious on the PANSS than Haldol for schizophrenia," but "Don't we already know atypicals are better than typicals?"  And then those same journals, in the future, will only want negative data, because their new sweeping generalization will be popular at Harvard via a grant from NIMH, all the Pharma guys having moved on to Ohio.  That's not the Decline Effect: it's a pendulum swinging wildly from one extreme to the other, over a pit, in which is tied a guy.  You're the guy.


VI.

Here's an example of how sloppy science becomes enshrined as "truth" by popular press outlets like The New Yorker.

In 2001, Michael Jennions, a biologist at the Australian National University, set out to analyze "temporal trends" across a wide range of subjects in ecology and evolutionary biology. He looked at hundreds of papers and forty-four meta-analyses (that is, statistical syntheses of related studies), and discovered a consistent decline effect over time, as many of the theories seemed to fade into irrelevance.

Look at that sentence, inadvertently hitting on the truth: the decline effect happened as the theories became irrelevant-- not the other way around.  The question isn't what does science say is true; the question is, what does the author want to be true?

But how can the author will a meta-analysis to show what he wants it to show?  Maybe he could manipulate an individual study, but wouldn't a "study of studies" be immune to his dark sorcery?

Imagine a study of Prozac vs. placebo in 10000 patients, and Prozac is awesome.  Imagine two more studies, each with 6 patients, and Prozac doesn't beat placebo in those. I now have three studies.  My meta-analysis concludes: "Prozac was found to be superior to placebo in only a third of studies."  Boom-- Associate Professor.

When meta-analyses look at only a few studies (e.g. N=4), if even one of them is a poorly designed study you can overwhelm-- or purposely extinguish-- what might actually be a real effect.
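The arithmetic of that hypothetical Prozac example is worth making explicit. A sketch in Python (all numbers invented, matching the example above): "vote counting" gives every study one vote regardless of size, while even a crude sample-size weighting-- a stand-in for the inverse-variance weighting a real meta-analysis would use-- shows the 10,000-patient trial swamping the two 6-patient ones.

```python
# Hypothetical studies: one 10,000-patient trial where Prozac wins,
# two 6-patient trials where it doesn't.
studies = [
    {"n": 10_000, "effect": 0.40},  # invented effect size: Prozac beats placebo
    {"n": 6,      "effect": 0.00},
    {"n": 6,      "effect": 0.00},
]

# Vote counting: one study, one vote -- size be damned.
wins = sum(1 for s in studies if s["effect"] > 0)
print(f"superior to placebo in {wins} of {len(studies)} studies")

# Sample-size weighting: the large trial dominates the pooled effect.
total = sum(s["n"] for s in studies)
pooled = sum(s["n"] * s["effect"] for s in studies) / total
print(f"pooled effect: {pooled:.3f}")  # ~0.400
```

Same three studies, two opposite headlines; which one gets computed depends on what the author wanted to find.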

In theory, researchers are supposed to be vigilant about the kinds of studies they lump together, making sure they are all similarly designed, etc.  In practice, researchers are not, on purpose.  Researchers all know what they want to find, and maliciously or unconsciously the studies to be included are selected, and, surprise, the researcher's hypothesis is supported.   I have a blog full of examples, but conduct your own experiment: take any meta-analysis, look only at the author's name, find out where he works-- and guess everything else.

While you're wasting your time with that, the author of that meta-analysis is talking to The New Yorker and changing reality: "well, studies have shown that..."

VII.

This is going to get worse as the internet allows for popular discussion but not for access to the primary data.  I am contacted all the time by the media: "hey, what do you think about the new study that finds that women are hotter when they're ovulating?"  I try to drop some knowledge in a media-friendly way, but at least a third of the time the reporter just wants me to agree with that atrocious study and speculate wildly.  "Do you think it's because their boobs get bigger?"  Let's find out.

It's easy to go through Lehrer's examples and identify the culprits of the supposed Decline Effect, but the best example of why "science" goes bad is, not surprisingly,  offered by Lehrer himself.  In (brace yourself) Wired, Jonah Lehrer answers some questions about his New Yorker article.  Recap: his premise is that the Decline Effect is real, occurs in all sciences,  may be a function of the scientific method itself, and eats away at even the most robust findings.

Question 1: Does this mean I don't have to believe in climate change?

Me: I'm afraid not. One of the sad ironies of scientific denialism is that we tend to be skeptical of precisely the wrong kind of scientific claims.

Get that? 

Instead of wasting public debate on creationism or the rhetoric of Senator Inhofe [critic of climate change], I wish we'd spend more time considering the value of spinal fusion surgery, or second generation antipsychotics, or the verity of the latest gene association study.

Jonah Lehrer is the Decline Effect.  I think he is a good and earnest person, and I know he was previously a scientist himself, but he ultimately grades the science he's not knowledgeable about based on value judgments.  Which is fine, it's his life, though I wonder if deep down he believes it.  If he goes psychotic, will he actually want me to give him Haldol over Abilify?

The trouble for the Earth is... he writes for The New Yorker. And Wired.   Which means that his value judgments carry more weight than the science itself.

If they didn't, I, and those who are real scientists, wouldn't have to explain why the Decline Effect doesn't exist; I wouldn't have to waste time rebutting his article.

But I do.  That's the problem.

---
You might also like:

Do video games cause violence?

Is Science Just A Matter Of Faith?







Comments


February 22, 2011 6:04 PM | Posted by Dan Dravot: | Reply

Nothing to worry about. The Decline Effect is real, but it will be increasingly difficult to replicate over time.

Score: 114 (118 votes cast)

February 22, 2011 6:32 PM | Posted by Kyle: | Reply

"This phenomenon isn't occurring in physics."

Not too sure about that: http://en.wikipedia.org/wiki/Oil_drop_experiment#Millikan.27s_experiment_and_cargo_cult_science

Score: 5 (17 votes cast)

February 22, 2011 6:48 PM | Posted by Wax Banks: | Reply

You, bastard, are now one of my (perhaps just Internet) heroes.

Score: 6 (10 votes cast)

February 22, 2011 7:14 PM | Posted by Thomson Comer: | Reply

Thanks! This is one of the best articles on corruption in the scientific methods I've ever seen.

Score: 0 (18 votes cast)

February 22, 2011 7:40 PM | Posted by G: | Reply

This article went over my head so I gave it an upvote.

Score: -5 (35 votes cast)

February 22, 2011 8:15 PM | Posted by Joey: | Reply

This is one of the biggest reasons why my professors in grad school told me when evaluating articles to NEVER skip the methodology, which is what most people do. Statistics is a good tool but can be misused without proper care.

Score: 29 (31 votes cast)

February 22, 2011 8:26 PM | Posted by acute_mania: | Reply

I remember when you posted about dosage equivalencies for antipsychotics with graphs showing that between 65 to 80 percent dopamine receptor occupancy was needed for a drug to be antipsychotic. Greater than 80 percent occupancy will likely cause extrapyramidal symptoms without much of an increase in efficacy. This means 7-20 mg of Zyprexa, 2-5 mg of Risperdal and 1.5-3 mg of Haldol. Would two milligrams of Haldol compare favorably with 10 of Zyprexa or 120 of Geodon in terms of efficacy and side effects? Has it even been studied?

Score: 1 (3 votes cast)

February 22, 2011 9:13 PM | Posted by Gary: | Reply

Does Valproic Acid have any maintenance effects? I use it to snow people for a few weeks, and then I taper it.

Score: 0 (2 votes cast)

February 22, 2011 9:53 PM | Posted by Anonymous: | Reply

"...imipramine doubles the mania rate, Depakote does nothing to it, and Zoloft lowers it, with apparent disregard for their scientific classifications."

Citation on Zoloft lowering the mania rate, please?

Score: 2 (4 votes cast)

February 22, 2011 10:07 PM | Posted, in reply to acute_mania's comment, by Anonymous: | Reply

Go read those articles again.

Score: 0 (4 votes cast)

February 22, 2011 11:19 PM | Posted by Steven Bagley, MD: | Reply

Yes, without the original data (and a lot of time), we're at the mercy of the interpretations (and the interpretations of those interpretations), which are vastly larger in number, and only loosely related to any kind of reality.

Two minor quibbles.

1. Your example of regression to the mean is actually an example of the law of averages. Since each coin toss is independent of the other coin tosses, they have zero correlation and regress to the mean in a single step. The law of averages applied to coin tosses says the absolute error is unbounded, but the percentage heads converges to 50%, which is true but irrelevant to the Decline Effect.

In any case, I'm not convinced by the quote in the article that regression to the mean does not explain some of the Decline Effect, because the size of the data set doesn't matter, it's the size of the correlation. Any correlation less than one will regress. One needs a placebo control, not more data.

2. Your example of meta-analysis uses a very simple aggregation statistic called "vote counting" ("… only a third of studies."). It's a bad idea for all the obvious reasons, and hardly ever used in real meta-analyses, which typically use inverse-variance weighting. Studies with small n's have large variances, small weights, and count less in the calculation of an overall effect size. Not that meta-analyses can't be done wrong….

I think physics is different from psychology in so many ways, but the following is still recommended reading:
Assessing uncertainty in physical constants, by Max Henrion and Baruch Fischhoff
American Journal of Physics, September 1986, Volume 54, Issue 9, pp. 791
Abstract: Assessing the uncertainty due to possible systematic errors in a physical measurement unavoidably involves an element of subjective judgment. Examination of historical measurements and recommended values for the fundamental physical constants shows that the reported uncertainties have a consistent bias towards underestimating the actual errors. These findings are comparable to findings of persistent overconfidence in psychological research on the assessment of subjective probability distributions. Awareness of these biases could help in interpreting the precision of measurements, as well as provide a basis for improving the assessment of uncertainty in measurements.

Score: 16 (16 votes cast)

February 23, 2011 2:07 AM | Posted by Anonymous Aspie: | Reply

I should really quit reading this blog, the thinking style he describes just doesn't apply to me.

@Dan Dravot: props

Gotta love the money quote at the end on why this doesn't apply to climate change. Reminds me of a postmodernist I read in an article a month or so ago desperately trying to come up with a way to claim evolution was *true*.

Score: -2 (10 votes cast)

February 23, 2011 3:11 AM | Posted by Anonymous: | Reply

Some nice points Steven. Yet, the point with regard to the substantive lack of the supposed Decline Effect in Physics, Chemistry and their 'intersection' fields still stands.

Bell's theorem hasn't regressed. Quantum Electrodynamics is just as precise. The Jahn-Teller Theorem hasn't vanished into the abyss, etc.

This is the age-old philosophical debate over reductionism, recast for the general public not in terms of the validity of the social-sciences, but why all of science (tacitly including them as a valid science) is failing. I'm sorry, but you're not that smooth Iceman. Whether you believe in the Special Sciences of Jerry Fodor or are a strict reductionist who believes the problem is the lack of constraints on the systems under study's degrees of freedom, it really is irrelevant to the stunt being pulled here. You can't just make something a fact by assumption.

As an aside, as someone who knows most every name on a John Davis paper, I can only hope he was rudely manipulated and misquoted with:

Many researchers began to argue that the expensive pharmaceuticals weren’t any better than first-generation antipsychotics, which have been in use since the fifties. “In fact, sometimes they now look even worse,” John Davis, a professor of psychiatry at the University of Illinois at Chicago, told me.

Quite a leap for a man whose meta-analysis of atypicals concluded with the obvious fact that they should not be viewed as and "are not a homogeneous group." Maybe you should have explained this better to Jonah.

- V

Score: 3 (3 votes cast)

February 23, 2011 5:38 AM | Posted by slw: | Reply

One of the fundamental principles of good programming practices, is that you have someone else design your tests and you have the tests designed(but not revealed) before you write a single line of code. Similarly, if tests are designed for software that already exists, you never tell the test designers what the software actually does, only what it is supposed to do.

If you are fitting hypothesis to data, you are measuring baloney. If you are running an experiment and you know what you are testing for and have a direct influence on the course of the experiment, again, you are measuring baloney.

Score: 7 (9 votes cast)

February 23, 2011 6:48 AM | Posted by Z. Constantine: | Reply

The Decline Effect provides us with the evidence we need to see that the "Law of Attraction" as outlined by Rhonda Byrne's best-selling book "The Secret" manifests even amongst hard-nosed scientific communities: The desire amongst contrarians to disprove the prevailing theory gradually nullifies the reality of the prevailing theory.

I think P.T. Barnum (or was it David Hannum..?!) had something to say about similar groupthink failures... but honestly TLP, would a person of average intellect and interest in the sciences know what to do with source data if he or she had it readily available?

[Pro tip: Use science for evil - it's the surest way to make a buck ... and you can't save the feckless from themselves]

Score: 2 (8 votes cast)

February 23, 2011 7:59 AM | Posted by syntaxfree: | Reply

You're almost at the end of your journey to find systems thinking.


Great article as usual.

Score: 0 (2 votes cast)

February 23, 2011 9:39 AM | Posted by Nando: | Reply

http://thirdtierreality.blogspot.com/

At least, those academics who feel that "law = science" do not need to be bothered with such technicalities. After all, those of us in the real world understand that "law = politics".

Judges and legislators have typically already reached their decisions prior to a hearing. They often rely on their own biases, prejudices, and ideology. Sometimes, a judge will be upset with the person before you, and guess who will feel his wrath? I have seen judges dress down prosecutors and defense attorneys for not wearing a tie!

I recognize that legislative hearings and court proceedings are for public consumption, i.e. "See? We allowed for public input. The process is transparent." I have seen legislators and judges loft softballs to those they WANT to see win, while tripping up those they want to see lose.

I have witnessed legislative hearings where one side has solid, hard facts, crisp presentation, stellar credentials - and they still lose, even when the contra position is represented by ignoramuses and dolts. Why?! It wouldn't be due to the fact that the "lawmakers" wanted to see one side win, would it?!?!

Score: 0 (12 votes cast)

February 23, 2011 12:08 PM | Posted by OhioStater: | Reply

This is slightly off-topic, but where does the Wall Street Journal fit in the New Yorker, Atlantic, NY Times pantheon?

They scored a huge hit with their Amy Chua article and they repeated it with the Kay Hymowitz article.

Good job Rupert!

Score: 0 (2 votes cast)

February 23, 2011 1:30 PM | Posted by Anonymous: | Reply

Calling the Decline Effect "Stupid" is pretty stupid. Is that what you do to get hits?

Especially with respect to psychoactive drugs. The drug's effect on human psyche is due to the changing nature of human psyche more than the drugs.

We have always had war and yet only recently been diagnosing PTSD. Did it not exist before? Of course it did, but no one cared enough to label it.

We care so much about PTSD in this country because we lead such sheltered lives, but does anyone talk about it in relation to Iraqi civilians? Of course not. They're tough enough to endure it.

Psychopharmaceuticals change their effect because 1. our psyches change, 2. our chemistry changes, 3. our reactivities change, and 4. our expectations change.

We need to continually change our survival strategies as the nature of what it means to be human changes.

Score: -5 (17 votes cast)

February 23, 2011 2:50 PM | Posted by Anonymous: | Reply

Imagine a study of Prozac vs. placebo in 10000 patients, and Prozac is awesome. Imagine two more studies, each with 6 patients, and Prozac doesn't beat placebo in those. I now have three studies. My meta-analysis concludes: "Prozac was found to be superior to placebo in only a third of studies." Boom-- Associate Professor.

Straw man. How disappointing.

Score: 5 (15 votes cast)

February 23, 2011 4:48 PM | Posted by Anonymous: | Reply

So you are basically saying, over and over, in every blog post, that popular changing medical opinion is often the result of financial pressure on behalf of industry and their hired workers?

Duh.

I've seen such huge success with my mood problem by taking myo-inositol, acetyl-l-carnitine, bright light therapy, chromium picolinate + GTF, an insulin reducing dietary practice, and lots of alpha linolenic acid + supplemental EPA/DHA.

There is a lot of research to support most of these interventions. The research on myo-inositol shows clear reduction in depression indicators and anxiety spectrum symptoms (and no, it is not a sedating/fattening drug like Remeron, so depression improvement will not be attributed to sleeping more and eating more).


I was so extremely depressed and nonfunctional at 18, now I am 10 years older and I am a totally different person, largely because I have taken responsibility for my mental condition and have been researching what causes it and how to control it. Not because I have matured in some psychological sense, but because my brain works properly now that I address the nutritional/lifestyle insults which made it work improperly. There is no amount of maturation which will make someone want to live, to feel pleasure and life means something. I cannot describe what it is like to have this problem (depression) and then to gradually get better because you have figured out why your brain wasn't working correctly.

When I talk to other crazy people, none of them have used any of those things. It's like, their answer is to just try zoloft, paxil, effexor, xanax. So, when you noticed you were falling into depression in november, did you try bright light therapy and stick with it for a few days/weeks? Oh, you just took some wellbutrin instead and got on lamictal? Ok.

Basically what I am saying is that I have completely turned around my physical and mental state by researching the truth and not buying the sound bites.

Of course you will tell me I am rocking a nice placebo high, because you were raised on the sound bites which state that non-pharmacological interventions are inappropriate and ineffective for mental disorders. Perhaps that's true for psychotic illnesses, but most psychiatric services are not for psychosis; they are for mood and anxiety disorders, which are clearly amenable to (and caused by) lifestyle and nutritional insults combined with genetic factors.


February 23, 2011 8:24 PM | Posted by lemmy caution: | Reply

This seems to be the likeliest explanation for the decline effect:

http://scienceblogs.com/mikethemadbiologist/2011/01/a_critical_cause_of_the_declin.php

If an effect is weak, the only "significant" (thus publishable) studies with small sample sizes will be the ones that greatly overstate the effect. When later tests are done with a larger sample size, the effect will decline or go away.
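The significance filter described here can be demonstrated in a few lines. A sketch with illustrative parameters (a weak true effect of 0.1 SD, small samples of 10): only studies that happen to overshoot clear the significance bar, so the published average overstates the truth.

```python
# Quick simulation of the significance filter: with a weak true effect
# and small samples, only the studies that (by chance) overstate the
# effect clear the p < 0.05 bar and get "published".
import random
import statistics

random.seed(0)
TRUE_EFFECT = 0.1            # weak true effect, in standard-deviation units
N = 10                       # small sample size per study
CUTOFF = 1.96 / (N ** 0.5)   # approx. 5% significance threshold for a mean of N draws

significant = []
for _ in range(20000):
    sample_mean = statistics.mean(random.gauss(TRUE_EFFECT, 1) for _ in range(N))
    if sample_mean > CUTOFF:         # keep only "significant" positive results
        significant.append(sample_mean)

print(f"true effect: {TRUE_EFFECT}")
print(f"mean 'published' estimate: {statistics.mean(significant):.2f}")
# The surviving estimates average several times the true effect;
# later, larger replications then "decline" back toward 0.1.
```

This is exactly the decline pattern: the first wave of small significant studies is a biased sample of all studies run.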


February 23, 2011 8:56 PM | Posted by Anonymous: | Reply

"I've seen such huge success with my mood problem by taking myo-inositol, acetyl-L-carnitine, bright light therapy, chromium picolinate + GTF, an insulin-reducing dietary practice, and lots of alpha-linolenic acid + supplemental EPA/DHA."

Those are great recommendations---

Also-- if you're rocking a placebo high, then likely so are a large portion of the people on pharmaceuticals. : )


(I'm with you, buddy! I think there are evidence-based studies that make the arguments for the suggestions you've given quite plausible. I'd like to see more research, and furthermore, psychiatrists taking note of such research--- HINT HINT)


February 23, 2011 10:13 PM | Posted by Adam: | Reply

This seems to intersect almost perfectly with something I read in Ben Goldacre's blog a few years ago:

http://www.badscience.net/2008/03/all-bow-before-the-might-of-the-placebo-effect-it-is-the-coolest-strangest-thing-in-medicine/

"Firstly, a study by Daniel Moerman looked at 117 studies of ulcer drugs from 1975 to 1994 and found that the drugs may interact in a way you might not expect: culturally, rather than pharmacodynamically.
Cimetidine was one of the first anti-ulcer drugs on the market, and it is still in use today. In 1975, when it was brand new, it eradicated 80% of ulcers, on average, in various different trials. But as time passed the success rate of cimetidine – this very same drug – deteriorated to just 50%.
This deterioration seems to have occurred particularly after the introduction of ranitidine, a competing and supposedly superior drug.
There are various possible interpretations of this finding: it’s possible, of course, that it was a function of changing research protocols. But one possibility is that the older drug became less effective after new ones were brought in, because of deteriorating medical belief in it."


February 23, 2011 11:42 PM | Posted by syntaxfree: | Reply

What happened to Alone's old practice of replying to some comments? It slowed down the comments section's tendency to become a miniature Craigslist, complete with people peddling their blog URLs, patients exchanging folk medicine recipes ("I treat my anxiety with glycine! It's a neurotransmitter precursor!"), and lonely libertarian conspiracy theorists whose conspiracy theories haven't become popular among libertarians or mainstream conspiracy circles.

A snarky one-liner here and there is all you'd need to keep this focused and more acceptable to other major dudes on the fringes of academia. (I had to think twice before recommending this to Eric Weinstein, for one). Unless you somehow get off seeing this bazaar effect snowball from small provocations.


February 24, 2011 7:43 AM | Posted, in reply to Anonymous's comment, by Anonymous: | Reply

PTSD was called "shell shock" after the first and second world wars.

The reason we don't read about Iraqi civilians experiencing PTSD isn't because they don't experience it, it's because it would make bombing civilians seem as horrible as it actually is (as opposed to war being just like a video game).


February 24, 2011 8:32 AM | Posted, in reply to Anonymous's comment, by Anonymous: | Reply

@Anonymous 4:48. It's good to hear that you've found these things helpful for your depression. It's fairly clear that nutrition and lifestyle affect our brain functions a great deal. But. I think you aren't giving mental (professional and self-) help (which I assume you mean by psychological maturation) enough credit. Although I have to admit the list of things that helped you sounds rather daunting (can't help but wonder: if it required all THAT, what chance does someone struggling with depression AND limited resources have... but that's just the cynical side of me), it does seem plausible that they are bound to be beneficial for some people. Naturally, the correct nutrition and lifestyle choices benefit everyone, but they are not universal and can vary a great deal depending on the person in question. As can, of course, the effects of, say, talk therapy, mental reframing, even self-suggestion or religion in helping one with depression. I'm glad focusing on the physical side helped you, but for others, that simply isn't enough. We are not only machines, guaranteed to work with the right oiling and fuel.
I personally tried, at the suggestion of several different doctors at different times over several years, a combination of several different meds in different quantities that sometimes caused insomnia, hypersomnia, nausea, accelerated heart rate, shivers, sweating, blurred vision and a bunch of other more or less nasty side effects, but never really helped my mood per se. Sometimes, with the help of the meds, I was able to work but was emotionally either drained and hollow or hypersensitive and anxious. It was a curiously dreadful feeling, being able to function on a level where I was of use to the economy and a contributing citizen, but completely out of touch with my loved ones and the things, besides work, that had been important to me before the depression. I felt like a proletarian zombie, doing my duties with a frozen smile and placid acceptance, but somewhere deep inside, I was screaming.
It couldn't last. While under medication that gave me more restless (and economically useful) energy than I'd had in years but that curiously had the possible side-effect of causing self-destructive behavior, I tried to kill myself.
Recovering, I decided to get off meds. That was the start of a long journey where I explored and found helpful the bright light therapy you mentioned as well as regular exercise and helpful diet choices, plus vitamin D in slightly larger than recommended quantities (suppose this could be called a drug, but hey. I live in a northern region where natural vitamin D reception can be exceedingly low at times, so nowadays my husband and I jestingly refer to the D as my sun-pills.) However, these changes alone weren't enough, but they did get me to a point where I had the energy and motivation to attend intensive dynamic talk therapy. With the help of my therapist I was able to come to terms with some hurtful things from my past as well as work, and I mean really work, on a more positive attitude and outlook in life. Mental reframing became a large part of this, and I do not believe I could be where I am today without some of what you call psychological maturation. While depression can be caused by largely, even purely physical reasons, that was not the case with me and I'm sure there are plenty of others out there whom this is true for, as well, and thus fixing only the physical side will never be enough for them to heal. It can be the thing that helps you start down that road, though. And I wouldn't tell anyone they are rocking a placebo high. What helps you, helps you and that's all good.
Oh well, guess I just needed to put that out there. Nothing like an anonymous wall of text (off topic, as well!) to get things off your chest. Bear with my (probably) narcissistic endeavor or feel free to ignore it. All the best. -L


February 24, 2011 9:11 AM | Posted by medsvstherapy: | Reply

I am a psychologist, and am soundly a scientist. Physicians and journalists have a hard time grasping this because these two camps somehow believe that physicians, trained in a model that is a mix of the apprentice model and what might be called natural science-based medicine, are scientists.

At least Alone appreciates the difference.

I put up a comment about psychology being a science at Bad Science, when some other person decided to make the comment that, because psychology results can be so variable and challenging, psychology is not a science.

I'll look it up.

If you claim to be a scientist, but don't actually functionally work as a scientist, you are simply deceiving someone - yourself or someone else.

If you claim you are a home inspector, and you go to a home, look around a bit, make checks on a form, and sign it, you functionally have not been a home inspector, regardless of whether you are certified.

The coin toss issue is not regression to the mean. You need some measurement error to get regression to the mean: some study subjects happen to score one way, and the error mixed up with the true measure happens to accent that direction a bit more. Well, next time around, the error will not coincide in the same direction the same way. This is a great reason why "modeling" should include validation on a similar but separate data set; there, the phenomenon is called "capitalization on chance."
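This point can be simulated directly: give every subject a stable true score, add independent measurement error at each test, select the top scorers, and retest them. A sketch with illustrative parameters:

```python
# Sketch of the claim above: regression to the mean needs measurement
# error. Observed score = stable true score + noise; re-test the top
# scorers and their average falls back, even though nothing about the
# subjects changed. All parameters are illustrative.
import random
import statistics

random.seed(1)
subjects = [random.gauss(0, 1) for _ in range(10000)]   # stable true scores

def observe(true_score):
    return true_score + random.gauss(0, 1)              # add fresh measurement error

first  = [(observe(t), t) for t in subjects]
top    = sorted(first, reverse=True)[:500]              # select the top observed scorers
retest = [observe(t) for _, t in top]                   # measure the same people again

print(f"top group, first test: {statistics.mean(s for s, _ in top):.2f}")
print(f"top group, retest:     {statistics.mean(retest):.2f}")
# The retest mean sits well below the first-test mean: the noise that
# helped put them on top doesn't repeat. With zero measurement error,
# the two numbers would match.
```

Set the noise standard deviation to zero and the "decline" disappears, which is the commenter's exact point.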

The "hard sciences" aren't as reliable as they tried to convince those of us who are older. (Now, "science" in school is basically a set of topics related to natural science intended to make kids believe that the globe is in imminent peril from anthropogenic global warming, over-population, and other nonsense - including, I will guess, whatever nutrition message Michelle Obama wants to be true).

The table of the elements just got changed. This made the news, but all of us who believe in this law-driven certainty of "physics" ignored the implications, or have not realized the implications.

The atomic weights have been given what are basically confidence intervals.

What did they discover recently to change this? Nothing. They have had the data for years and years.

For years, school kids have sat under that periodic table of the elements feeling safe in the reassurance of the certainty of "science" (natural science). All of those charts were wrong, and the physicists knew it. Yet they continued to let this impression of "hard science" ride on, and they left the periodic tables of the elements up on the wall, with those firm atomic weights.

http://www.sciencedaily.com/releases/2010/12/101215133325.htm

We scientists who study human behavior may not be able to predict or explain why some combat vets with similar experiences will get PTSD and others won't. But if we investigate this topic through recognized scientific strategies, and by the value/belief system of science (science is a belief system established on a set of concepts taken as a "given" to be true because they are reasonable), we are being scientists investigating a question orders of magnitude more complicated than predicting the trajectory of a satellite.


February 24, 2011 9:57 AM | Posted by fraise: | Reply

...the article predictably focuses on the psych drugs that we hate to love to take, that keep the McMansions heated and the au peres blondily Russian.

Best. Lapsus. Ever.

père = father
au père = to the father
pair = equal (in this context), from the same Latin "par" of our English "par"
au pair = literally means "on an equal level; on a par", nanny-slash-domestic assistant
http://en.wikipedia.org/wiki/Au_pair

Is your unconscious implying that au pairs are aux pères? ;-)


February 24, 2011 2:28 PM | Posted by HMMM...: | Reply

"imipramine doubles the mania rate, Depakote does nothing to it, and Zoloft lowers it"

Data please? Especially URLs I can get at without an expensive subscription to expensive journals?


February 24, 2011 2:29 PM | Posted by One Point: | Reply

The good part: "The problem isn't that the Decline Effect happens in science; the problem is that we think psychology and ecology and economics are sciences."

The bad part: you kept typing.

medsvstherapy: I put up a comment about psychology being a science at Bad Science, when some other person decided to make the comment that, because psychology results can be so variable and challenging, that psychology is not a science.

February 24, 2011 2:52 PM | Posted by Anonymous: | Reply


While there's no need to be disparaging about it, there is a difference in the way 'science' is carried out in these fields.

For example, has quetiapine's binding affinity at D2 declined to unobservable over the last decade? Of course not. I can run 10,000 ITC runs and I'll approximate the same number, which is determined by the underlying physical properties.

Furthermore, I can run 10,000 PET scans and observe the same occupancy of quetiapine at D2R.

Yet, somewhere between chemistry and neurobiology (which have consistent results) and psychiatry is a gulf in which these results are being lost. It's like the one-way membrane of a black hole: we put information in, and some gibberish comes out the other side that may or may not retain the information.

A clue as to where the problem resides is the fact that, as I previously said above, in the former sciences you are able to constrain the system size to just the variables you want to measure. You can perform classical science on them.

Psychiatry and the natural sciences are working with minimalist models, not because it's a superior way to do science (it's not), but because their systems under study have so many degrees of freedom -- they are so unwieldy and in over their heads in terms of complexity and the number of variables -- that they can only work with these small toy models.

And the problem is their 'science' is only valid under the tacit assumption that their model has captured all the fundamental dynamics of the system -- quite a tall order for such a minimalist representation. Especially as it's been demonstrated in the area of computability theory and complex system analysis that some systems are not compactable and may never be prestatable.

Psychiatry's current problem will fade as the neuroscientists and physicists bail you out with better diagnostic tools in the next decade or so. Yet the fundamental problem of philosophy of science stands.

- V

PS. The periodic table change has nothing to do with this. It's a reflection of better accuracy: physics can now more precisely measure coupling constants such as alpha (fine-structure) as it converges on a definite value, and pass that accuracy on to fields where the elemental composition is heterogeneous. It would be a problem if, as we re-sample alpha, the value regressed to nothing.


February 25, 2011 4:43 AM | Posted by Anonymous: | Reply

That was a lot of section 5s.


February 25, 2011 10:33 PM | Posted, in reply to Anonymous's comment, by Anonymous: | Reply

Got any good links on system compactability (compactibility?)? I wouldn't even know how to start searching for that.


February 26, 2011 12:14 PM | Posted by Wittewijven: | Reply

You write: "This phenomenon isn't occurring in physics." But a simple glance at the article you link says:

The same holds for any number of phenomena, from the disappearing benefits of second-generation antipsychotics to the weak coupling ratio exhibited by decaying neutrons, which appears to have fallen by more than ten standard deviations between 1969 and 2001. Even the law of gravity hasn’t always been perfect at predicting real-world phenomena. (In one test, physicists measuring gravity by means of deep boreholes in the Nevada desert found a two-and-a-half-per-cent discrepancy between the theoretical predictions and the actual data.)

...which suggests that you didn't really read the article any closer than you needed to justify going off on yet another rant about how everyone in the world is drowning in confirmation bias except for you. Although I suppose, given your normally decently clear thinking, one could charitably assume that you were deliberately writing from the obviously absurd and unsupported position in order to really drive home to your readers the need to ask themselves what the author wants to be true. So what do you want to be true?

Well, clearly you don't want to have to consider the possibility the article presents, that reality is not so reliable and consistent as the stories we tell ourselves about it. And apparently you also want there to be a way for people to get at that absolutely consistent reality, and you want to be able to draw the line as to what is and is not the right way to do that. Which makes your post essentially a propaganda puff piece for - not science, but - SCIENCE!, the new religion and cultural rationale for claimed rightness and absolutism.

Ugh. You're usually so much better than this.


February 26, 2011 4:12 PM | Posted, in reply to Wittewijven's comment, by The Devastator: | Reply

The fact that the law of gravity is constant is verified much more precisely than 2% by the orbits of satellites and the moon around Earth, and of the other planets around the sun. The lengths of the month and year, for example, have followed theoretical predictions with high precision for as long as people have been able to take accurate measurements.

In other words, tide goes in, tide goes out. Never a miscommunication. You can't explain that.

I'm sorry, I couldn't resist. To continue:

I would guess that an experiment measuring gravity by dropping objects down deep boreholes is going to have large error bars, not because gravity is variable, but because of air resistance! Air resistance is well understood of course, but it depends in complicated ways on the mass and shape of the falling object. I'm sure the physicists in this experiment worked hard to model air resistance correctly, but no model of something as complicated as air resistance can be perfect.


February 26, 2011 8:58 PM | Posted, in reply to Anonymous's comment, by Anonymous: | Reply

Got any good links on system compactability (compactibility?)? I wouldn't even know how to start searching for that.

I'd suppose the best introduction might be Ed Lorenz's The Essence of Chaos. It's as entry-level as I've seen for the field (which scales up very fast) and is written by a pioneer. I should try and find my copy, it's around here somewhere...

- V


February 26, 2011 9:00 PM | Posted by Jack Coupal: | Reply

The medium is the message?


February 27, 2011 11:14 AM | Posted by Jack Coupal: | Reply

The problem isn't that the Decline Effect happens in science; the problem is that we think psychology and ecology and economics are sciences.

You forgot to add "political science" to that list of pseudosciences.


February 27, 2011 11:33 AM | Posted, in reply to The Devastator's comment, by Wittewijven: | Reply

The fact that the law of gravity is constant...

a. has nothing to do with Alone's mis-characterization of what the article said, and
b. is most likely an assumption on your part, an amalgam of at least two other assumptions: that variation in gravity would have been noticed by now if it had been occurring, and that what appears constant is constant rather than a statistical average that ignores uncommon outliers.

For an example of how physical (in)constancy can be hard to determine, have a look at some of the arguments over the fine-structure constant.

I would guess [...] air resistance!

Personally I would guess that I didn't know enough about the experiment(s) in question to cast judgments upon them, and I certainly wouldn't be assuming as an outside party that difficulties with air resistance had not been taken into account by professional physicists.

You're making the same mistake Alone did, Devastator. You have a picture in your head of how you believe the world must be, and you're rationalizing away any challenges to that picture as incorrect based not on knowledge of the experiments those challenges are founded upon but upon the fact that they conflict with your preconceptions of reality.


February 27, 2011 2:03 PM | Posted, in reply to Wittewijven's comment, by The Devastator: | Reply

Okay, I think I found the paper you're talking about: "Testing the inverse square law in boreholes at the Nevada Test Site." Link: http://authors.library.caltech.edu/6356/

Apparently the researchers' most important source of systematic error was not air resistance, but the uncertainty in the density of the Earth deep inside the borehole. So I was wrong there. But I was correct in that the researchers did not find an experimental deviation from the law of gravity larger than their systematic errors, so it seems the borehole experiments do not provide any evidence against Newtonian gravity.

Again, I refer you to orbital mechanics. Orbits of planets, the moon, GPS and communication satellites, space probes, and other spacecraft, are precisely calculated using Newtonian gravity. Orbital data agrees with theory to great precision.

The Wikipedia article on the Fine Structure Constant was interesting. I think it is quite possible that fundamental constants are evolving with time -- maybe they are linked in ways we don't understand to the dark energy density or the size of the universe? Note the scales involved, though. The experiments cited in the article claim that the FSC may have changed by a factor of 10^-6 over cosmological timescales. (This is not certain though -- the measurements are at the edge of what is possible). I am ready to entertain the possibility that the gravitational constant is evolving on a similar scale. But a few percent over historical timescales? No way. We would have noticed it.

Not to go all Sagan on you, but extraordinary claims require extraordinary evidence. Do you have extraordinary evidence?


February 28, 2011 11:39 AM | Posted by vv111y: | Reply

So, to try and understand this better, is this guy a narcissist?

http://www.jamesaltucher.com/2011/02/10-confessions/

I thought this was refreshing to see, and then I thought about what you have been saying. But I really can't tell off-hand whether he is or isn't. What are the criteria to use?


February 28, 2011 11:45 AM | Posted by medsvstherapy: | Reply

The Devastator sez: "Again, I refer you to orbital mechanics. Orbits of planets, the moon, GPS and communication satellites, space probes, and other spacecraft, are precisely calculated using Newtonian gravity. Orbital data agrees with theory to great precision."

A bunch of the same slippery language you hear in cultural anthropology and the other areas of Marxist study masquerading as "science": if celestial mechanics are so "precisely" calculated, why throw in fuzzy language such as "agrees with theory to great precision"?

Does "agree with great precision" suggest that there is error?

I thought there was no error in predicting orbits. I thought "the science was settled."

Alone's predictions of recidivism (a science-based prediction of human behavior) may "agree with great precision," as long as the range of "great precision" is sufficiently broad.


February 28, 2011 4:03 PM | Posted, in reply to medsvstherapy's comment, by The Devastator: | Reply

There is always error in any scientific measurement, which is why it is correct to say a physical number is known "to great precision" and not "exactly." The method you use to track a satellite is always imprecise because instruments aren't perfect. There are also physical perturbations to orbits themselves -- a satellite orbiting Earth moves almost as if Earth is alone in the universe, but there are small perturbations from the gravity from the sun and moon. These can be taken into account, of course, but then there are errors in the measurements of the masses and positions of the moon and sun, etc.

Nevertheless, orbital mechanics is, for all practical purposes, exact, especially compared with measurements we are used to in everyday life. As an example, read about the Pioneer Anomaly: http://en.wikipedia.org/wiki/Pioneer_anomaly. The anomalous acceleration might be due to genuine new physics, such as a correction to gravity. It might also be due to a real effect within known physics, such as drag from the interstellar medium. Or it might be due to a problem with the spacecraft itself, such as recoil from evaporating paint. A fourth possibility is tracking errors, or even errors due to the different ways that numbers have been recorded in computers over the decades.

The relevant point here is that the anomaly amounts to an unexplained acceleration about 10 billion times weaker than gravity on Earth. This acceleration was noticed, and is the subject of intense scientific interest. So whether or not the familiar law of gravity breaks down over very large time and distance scales, it is a fantastically good approximation to the data.

Does that make sense?


March 2, 2011 12:58 PM | Posted by Anonymous: | Reply

To be fair, replication issues can arise in any area of science where data is not closely scrutinized or expected to be replicated--and this includes at least some areas of hard science, among them my own field, inorganic chemistry. I'm guessing some areas of physics are no better.

In my specific case, while we aren't dealing with evolving populations or changing mindsets, the motions of collections of atoms are subject to more influences than we generally take into account, and are also harder to measure conclusively than we acknowledge. And we are not expected by peer reviewers to repeat experiments enough for statistical significance. Even worse, because this work isn't directly relevant to industry, it rarely gets replicated or looked at closely, making it possible to publish overinterpreted data. Accordingly, if anyone cared enough to read the results and spread them as accepted wisdom, we would have just as much trouble with "facts" losing their truth as any softer science. The only thing that saves face is that nobody expects the results of our experiments to generalize to other similar systems, and generally only one person works on a given system, allowing them to catch their own errors--or not.

Granted, I'm a disillusioned grad student, but most scientists in chemistry seem to agree that the peer review process is broken. Bad science is published in good journals and by respected scientists; reviewers routinely miss the point of the papers they review... So don't pretend that some fields are solid and true while others are squishy--the scientific method can be applied well or poorly to any area, and generally what makes the facts of one field actually true is that someone tried to use them for something, and complained if they didn't work.


March 2, 2011 1:12 PM | Posted, in reply to Anonymous's comment, by vv111y: | Reply

Any chance automation & "mass-production" type science would help?

What about incentives - throwing more money at science?

Open Access - better, worse, or makes no difference?


March 2, 2011 9:13 PM | Posted by Anonymous: | Reply

I work in science, and sometimes I run a bit of code to 'navigate' towards an answer, and I hit something really exciting right at 5:30 pm and I have to go home at 6:30 pm. The next morning, it looks completely different and I throw it out. And then I find it again later in the week, and I find myself sifting through the code line by line to figure out where my thinking went wrong or where the code went wrong. I tell myself 'I'm a scientist,' but sometimes, there in the office with just myself and the computer, I feel like I'm in the twilight zone.

Sometimes I take deep breaths for hours and really, really study and own every line of that code. I build a true edifice of Maxwell's equations... but on some days, the code isn't going to work right until I reboot Matlab.

We've got unreliable wetware, and the software isn't so reliable either. We hope the hardware is. Whatever is out there screwing with my brain had better not mess with my iPod, the launch platform, or the Scientific Method.

March 3, 2011 8:37 AM | Posted by medsvstherapy: | Reply

Anon 12:58 FTW: "So don't pretend that some fields are solid and true, while others are squishy--the scientific method can be applied well or poorly to any area."

March 6, 2011 7:34 PM | Posted by Hans Gerwitz: | Reply

The gem in this post is the use of epsilon as a defense, to "calibrate" our models to fit observations. This may arise in any research area, but to me, at least, it often seems economics and other behavioral sciences are erected primarily on ethereal foundations of epsilons.

April 10, 2011 10:58 AM | Posted by cristina: | Reply

excellent!

September 7, 2011 9:06 PM | Posted by jlw: | Reply

The problem is not with the scientific method. Further, the explanation of 'regression to the mean' is correct, but 'statistical anomaly' is not.

The problem is the original author's interpretation of statistics, combined with a flaw in everyone's application of Fisher's p-value as the risk of getting it wrong (which is how Fisher proposed it). The chance of a wrong positive inference is not something like one in a million. It's more like 5 in 100 (i.e., alpha = 0.05 by 'convention').

Fisher wanted researchers to share the risk of a Type I error ACROSS unrelated studies. Used this way, the p-value commits 5% of all studies to false discoveries if the acceptable Type I error rate is 5%. Given that these days something like 20,000+ studies are done in the world each year, that is a lot of false discoveries. The solution is for individual studies to siloize their false-discovery risk TO THEIR OWN STUDY by (1) doing phased research (Phase 1 = discovery, Phase 2 = validation) to study how well the initial findings generalize; (2) using statistical methods tailored to the individual study (i.e., actually considering the distribution of the data, instead of making assumptions about it, hoping the test is (ahem) 'robust', and pulling the trigger on the test even when its distributional assumptions are not met); (3) using permutation statistics instead of parametric statistics; and (4) powering their studies appropriately. Add to this that too few researchers check their study designs for obvious confounders (e.g., pancreatic cancer tissue from old men vs. normal pancreatic tissue from 20-year-old motorcycle accident organ donors), and you're going to see a lot of 'findings' not hold up over time.
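The across-studies arithmetic in this comment can be checked with a toy simulation (the numbers are assumptions for illustration: 20,000 studies whose null hypotheses are all true, each tested at alpha = 0.05; under a true null the p-value is uniform on [0, 1]):

```python
import random

random.seed(42)

ALPHA = 0.05        # the 'conventional' Type I error rate
N_STUDIES = 20_000  # rough worldwide study count from the comment

# Under a true null hypothesis the p-value is uniformly distributed
# on [0, 1], so each null study 'discovers' an effect with
# probability ALPHA.
p_values = [random.random() for _ in range(N_STUDIES)]
false_discoveries = sum(p < ALPHA for p in p_values)

print(false_discoveries)  # close to ALPHA * N_STUDIES = 1,000
```

So even if every individual study is performed flawlessly, the convention alone produces on the order of a thousand false 'discoveries' a year at that volume.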

It's time for statistics to grow up to meet the real demands of modern scientific research.
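The permutation-statistics suggestion above can be sketched in a few lines; this is a minimal two-sample version (hypothetical data, difference of means as the test statistic, no distributional assumptions):

```python
import random

random.seed(1)

def permutation_p_value(x, y, n_perm=2000):
    """Permutation test for a difference in means between samples x and y.

    Instead of assuming a parametric null distribution, repeatedly
    relabel the pooled data at random and count how often the shuffled
    mean difference is at least as extreme as the observed one.
    """
    observed = abs(sum(x) / len(x) - sum(y) / len(y))
    pooled = list(x) + list(y)
    n_x = len(x)
    hits = 0
    for _ in range(n_perm):
        random.shuffle(pooled)
        a, b = pooled[:n_x], pooled[n_x:]
        diff = abs(sum(a) / len(a) - sum(b) / len(b))
        if diff >= observed:
            hits += 1
    # Add-one correction keeps the estimate away from an impossible 0.
    return (hits + 1) / (n_perm + 1)

# Same distribution: no real effect, so p should usually be large.
x = [random.gauss(0, 1) for _ in range(20)]
y = [random.gauss(0, 1) for _ in range(20)]
p_null = permutation_p_value(x, y)

# Shifted distribution: a real effect, so p should be small.
z = [random.gauss(1.5, 1) for _ in range(20)]
p_shift = permutation_p_value(x, z)
print(p_null, p_shift)
```

The point of the design is that the null distribution is built from the data themselves, so nothing hinges on normality or on the test being (ahem) 'robust'.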

November 18, 2011 10:39 PM | Posted by Anonymous: | Reply

Dear TLP, don't let these haters get you down. If they don't like what you have to say, then they don't have to read your articles or let the world know; those are their issues and views. Keep going strong--you've opened my mind up to think in ways I never thought possible. (y)

March 29, 2012 5:11 AM | Posted by Anonymous: | Reply

I think I love you!

July 1, 2012 6:44 PM | Posted by Guest: | Reply

What is stupid is this article.

August 10, 2012 1:52 AM | Posted by Hoplite: | Reply

The only problem with this article is the end. Lehrer is neither good, nor earnest, nor was he ever a scientist. And it shows. Otherwise great article. Don't let the haters get you down.

August 17, 2012 3:55 AM | Posted by isomorphismes: | Reply

I never looked in detail at what Schooler regards as "passing all the statistical tests," but given the complaints statisticians make about psychologists and ecologists (e.g., the "magic p-value"; never having heard of cross-validation, which wouldn't get you a publication even if you did it), I'm inclined to agree with you: the most likely explanation is bad statistics.

Next up: is Nathan Nunn a statistical hoodwinker, or does he properly sweep the floor with humility at century-long ceteris paribus "statistical controls"?

August 17, 2012 4:23 AM | Posted by isomorphismes: | Reply

I assume you're just being colourful with the epsilons, the correlation/causation, and the binary distinction of science / not-a-science.

John D Cook's Google+ page (as well as some of his Twitter feeds, like @StatFact, and his blog) is a professional-level resource for irregular updates from the land of awful statistics in science. A lot of professional statisticians air their gripes with bad research practice in his space.

December 20, 2012 10:54 PM | Posted by PJ: | Reply

The unfortunate thing is that
a) the decline effect is seen fairly often, but
b) nobody seems to want to follow the trail.

It's always assumptions, like: it was never really correct, and now we're seeing the truth. But it seems to me that often it is data, and it is telling us that we're missing some confounding factor.

As for research that uses human beings, such as ESP research--ignoring for a moment the woo factor--there are a gazillion things that can affect the performance of human beings, and the measure of every trial is an 'experience' which, both in and out of context, may change the human, and hence their performance. Even in biology this is likely so, but with a far less grand range than anything psyche-related offers.

So when I see the term 'decline effect,' I'd expect it could only fairly be used on something as close to hard science as possible. For everything else, either (a) we don't know enough about baseline or consistency to measure it properly, or (b) the 'seeming' decline effect is probably an adaptation or habituation effect, or an after/side-effect-by-proxy in some fashion.


June 10, 2013 1:31 PM | Posted by trail: | Reply

Funny reading this again, now that we know Jonah probably made it all up.

No wonder it didn't make sense.
