January 26, 2010
The Massacre Of The Unicorns II
if it had a horn, I'm sure it would be a rhinoceros
There's a debate of sorts in psychiatry: to what extent should we rely on evidence-based medicine?
It's almost a trivial point-- we're going to rely on it anyway, so why debate it? The question is better phrased: "to what extent should the future of psychiatry rely on evidence generated now?"
In a series of articles Nassir Ghaemi tries to justify Evidence Based Medicine in psychiatry; specifically, the primacy of evidence over theories or models.
Ghaemi says clinical realities are more important than theories, and EBM allows for the study of clinical realities. Any deficiencies in the evidence-- confounding bias, diagnostic uncertainties, etc.-- are really a problem with the application of the studies, not with the possibility of EBM in psychiatry. In other words, EBM in psychiatry is sound, but we need more and better data.
Ghaemi is arguing for an empiricist's approach, as opposed to a top-down theoretical approach-- one that starts with a theory, with concepts, and either ignores evidence or bends the evidence to conform to an existing theory. (His example: psychoanalysis.)
II.
He asserts that the foundation is clinical observation, which is then studied further using scientific methods. For example, hormone replacement therapy, used in thousands of women, was later determined to be ineffective if not harmful. See? More evidence, better practice.
Hormone replacement therapy was the cure for many female illnesses. Decades of experience with millions of patients, huge observational studies with thousands of subjects, and the almost unanimous consensus of experts all came to naught when randomized studies proved the futility of the belief in that treatment (not to mention its carcinogenic harm).
A moment's reflection shows this argument to be illogical. Hormone replacement therapy did work. It had great risks, but to say that it was a failure is wrong. "It was the cure for many female illnesses, but..." So it was adequately tested in all of them, indicating its futility?
Ghaemi would respond that we would need more studies to determine the efficacy and risks in each indication, in each population. That would be right, but that's not what happened: doctors generalized the failure of a medication based on the outcomes in a restricted symptom set.
"Not better than placebo" is another false start. If a medication and a placebo both show a 25% response rate, it doesn't mean the drug is "no better than placebo": what if two different 25%s responded? Would the group that "responded" to the drug also have responded if crossed over to placebo?
The same is true for symptoms: if placebo and drug both result in a 25% reduction in symptom severity, it neither means the drug is a failure, nor, indeed, that the placebo is a placebo.
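The overlap problem can be made concrete with a toy construction (invented numbers, not data from any real trial): build two arms that both show exactly a 25% response rate, but where the drug-arm responders are, by construction, people who would not have responded to placebo.

```python
# Toy construction (illustrative numbers, not data from any real trial):
# two arms of 400 subjects each, and in each arm exactly 100 (25%)
# "respond" -- but the drug-arm responders respond only to the drug.
ARM_SIZE = 400

drug_arm = ["drug_only"] * 100 + ["neither"] * 300
placebo_arm = ["placebo_only"] * 100 + ["neither"] * 300

drug_rate = drug_arm.count("drug_only") / ARM_SIZE
placebo_rate = placebo_arm.count("placebo_only") / ARM_SIZE
print(f"drug response rate:    {drug_rate:.0%}")     # 25%
print(f"placebo response rate: {placebo_rate:.0%}")  # 25%

# Cross the drug-arm "responders" over to placebo: by construction
# none of them respond, because their improvement required the drug.
crossover_responders = sum(
    subject == "placebo_only" for subject in drug_arm if subject == "drug_only"
)
print("drug responders who also respond to placebo:", crossover_responders)  # 0
```

The aggregate statistic "25% vs. 25%" is identical in both arms, yet the two responder populations here are completely disjoint-- which is exactly the information a headline response rate cannot give you.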
"Well, we'd need more and better studies." Of course. I'll wait at the bar.
III. Pay No Attention To The....
This story is apocryphal, so consider it a parable:
Pierre Eymard and colleagues were studying novel antiepileptics, using dipropylacetic acid (a solvent) as both the intravenous vehicle for the drug and as the placebo. They observed that the placebo worked, too, preventing seizures at higher concentrations.
If this had been a phenotype without visible effects, the perfectly ordinary conclusion would have been that the drug was no better than dipropylacetic acid-- aka Depakote.
"But placebos nowadays are inert." Is the fluorescent lighting in the office a placebo-- maybe it makes the anxious depressed patients more anxious? "Come on, those studies are from 1990." I guess that means the question had been satisfactorily resolved, requiring no further investigation?
"Well, more studies are needed..." Tell the bartender I take my rum straight.
IV. Improvement In Depression
Take the simple example of depression, as measured by the popular Hamilton Scale. The scale measures insomnia and weight loss, but not hypersomnia and weight gain. Using this scale, a patient who sleeps too much and eats too much is less depressed than someone who sleeps too little and has lost weight. And any drug that fixes sleep and makes you gain weight has an advantage over drugs that don't. In fact, a third to half of the improvement on the Hamilton could be accomplished by improved sleep and appetite alone. Go Zyprexa.
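The arithmetic behind that "third to half" can be sketched from the item structure of the 17-item Hamilton scale. The per-item maxima below are the standard ones; the trial figures (baseline near 24, improvement of about 10 points) are illustrative assumptions, not results from any particular study.

```python
# Per-item maximum scores for the sleep and appetite items on the
# 17-item Hamilton Depression Rating Scale (standard item structure).
sleep_items = {"early insomnia": 2, "middle insomnia": 2, "late insomnia": 2}
appetite_items = {"appetite (somatic, GI)": 2, "weight loss": 2}

ceiling = sum(sleep_items.values()) + sum(appetite_items.values())
print("points available from sleep/appetite items alone:", ceiling)  # 10

# Illustrative assumption: a drug arm improves ~10 points from a
# baseline near 24. A sedating, appetite-stimulating drug that moves
# each of these five items by just one point captures half of that
# improvement without touching mood at all.
assumed_improvement = 10
one_point_each = len(sleep_items) + len(appetite_items)
print(f"share of improvement: {one_point_each / assumed_improvement:.0%}")  # 50%
```

Five of the seventeen items can be moved by sedation and appetite stimulation alone; under these assumptions that is half the total score change, with the theoretical ceiling (10 points) covering the entire improvement.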
Note that the results of drug trials are reported only as total scores; you have no idea what symptoms the drug is fixing, or not. "But it's not powered to detect those effects." Ok, but it isn't designed to tell you if it's an "antidepressant," either; only if it lowers scores on the Hamilton in this single sample group.
"We need more studies, more scales." But in the meantime we're left with "X is an effective antidepressant."
The standard academic line is that the evidence indicates all antidepressants are generally equally efficacious. Think about this. Have you ever met a single patient for whom that was true?
For a hundred reasons, none of that data applies to the patient sitting in front of you, yet it is the best information you have to go on. You have nothing else. Ok, go. The problem is not in the application of evidence to your patients, the problem is in the application of the theory that the evidence is creating in you onto your patient.
The Tohen data may show that Zyprexa is efficacious in depression, but when you prescribe it you are thinking, "antipsychotics in general are efficacious in depression in general, and I need a sedating one." You are doomed.
"But more studies are needed..." I look forward to reading them, or passing out, whichever comes first.
V.
That's the issue. In order for this to be a science, there has to be a testable hypothesis. There isn't any of that in psychiatry.
Example: antidepressant-induced mania seems like the kind of testable question amenable to scientific investigation. Do they cause it, or not? But it's not easily answered; indeed, it cannot be answered. Which antidepressant? What's an "antidepressant?" Cymbalta, Pamelor, or Seroquel? Or CBT? What about semen? Which symptom of depression is it treating or not treating that allows you to call it an "antidepressant"? You could do a billion studies on every drug ever made, in every description of "depression" imaginable, and that would only allow you to say, "ah, I know the answer in a billion specific situations"-- but you would still have no insight into the nature of the phenomenon.
Why don't all antidepressants cause it? "Well, there are exceptions to the rule." You've been infected: the rule is meaningless.
When you give someone Paxil, you are playing the odds: this worked in 25% of the guys we gave it to in 1997. There's nothing wrong with doing this, that's what you're supposed to do; but it does not allow you to speculate on the nature of either "antidepressants" or "depression."
Simply put, the problem with "Evidence Based Medicine" isn't the evidence, but the "based." Existing evidence can guide practice, but cannot be used to create a general practice model. "Mood stabilizers are the cornerstone of treatment in bipolar disorder." While I have no idea what you're talking about, I'm certain to be punished if I don't oblige.
In physics, such empty theories don't hurt anyone, and there's value in the theory itself. String theory may turn out to be wrong, but you at least are going to be really good at math. Okay.
But in psychiatry these empty questions and empty answers are still applied to social concepts as if they carried the weight of scientific validity. The question of "antidepressant-induced mania" may be empty, but that doesn't stop the legal system from using it. You can't imagine the defense proposing that at the precise moment of the murder, the universe split into two equivalent eigenstates and the defendant, in this eigenstate, had been already determined to have had to have been committing the act of murder, which he already had even before he started; but that explanation carries considerably more scientific merit than the psychiatric one, by which I mean both have absolutely none at all. Wovon man nicht sprechen kann...
VI. Here There Must Be Dragons
"But you're not really arguing against the primacy of empirical evidence, you're arguing about the misapplication of that evidence. You're arguing against incorrect generalizations, against lumping data sets together to invent a clinical model."
No, it's much worse than that.
The problem isn't that the data is sound and we merely shouldn't extrapolate or generalize hastily from it. The problem is that it is impossible not to do this.
The first reason is because of the use of words. "I met a blonde girl last night." Oh really? he replied knowingly. The words "depression" or "bipolar" or "antidepressant" all existed before we started using them. "Bi" and "anti" and "relapse vs. recurrence" all have connotations that may have no relevance to the way they are used now, yet those connotations will inevitably surface. It seems as though "evidence based medicine" has discovered that the antipsychotic Seroquel is also good for depression, but that's not science, it's an accident of history: 15 years ago the molecule could have been tested for depression, only to now be approved for psychosis. The evidence, the science, may be neutral on the drug's identity, but it will never be equivalent to an SSRI in your mind. In order for it to be successfully rebranded, everyone who learned it the other way has to die.
Second, the explicit purpose of psychiatry is to apply the discoveries immediately. The hasty generalizations and applications aren't a byproduct of the field, they're the whole point. We don't have time to wait for a physiological explanation for bipolar, we have to get people better now. But while extrapolating from "kindling theory" or one antiepileptic's mania data to a theory of "mood stabilization" is a noble attempt, it's still wrong.
Third, our brains have no alternative but to assume causality. No matter how many times you say "X is associated with Y" we will think "X causes Y." Academics like to point this mistake out when residents do it, but everyone is guilty of it, all the time. This isn't a criticism of human laziness, this is how we're designed. Our brains can't help it, they do not allow for a vacuum, they force causality. The brain may not let it become conscious, but you'll act like it, breathe like it. Even when you know it's wrong. I know how a mobile phone works, yet I still yell louder when it starts breaking up. The only way to stop assuming one explanation is to be given another explanation.
Fourth, while 1 + (-1) = 0, a positive study is never completely refuted by a negative one-- and vice versa. Even if studies are of identical design in the exact same patients, the marketing of a study-- who wrote it, where it was published, how many "thought leaders" got behind it, how many pages, tables-- all of this supersedes the content. Even if you successfully appraise a single study on its merits, the rest of the vastness of psychiatric literature is available to you only by rumor. When the fashion turns away from SSRIs and North Face jackets, you'll frown when they occasionally reappear.
Fifth, simply asking the question often overwhelms the evidence. If you ask, "does Geodon cause QTc prolongation?" it immediately stops mattering whether the evidence shows conclusively that it doesn't, or that it was a mistake; it even stops mattering whether you even understand what "QTc prolongation" means. The moment the question is asked, you are forever condemned to pause before prescribing Geodon.
VII.
I've avoided discussions about groupthink or specific biases in studies as they are incidental to the fundamental problem of psychiatry, which is a faith in the primacy of evidence in the absence of any interest in a theory of mind. Evidence can, should, and does inform practice, and none of its shortcomings should change the way we use it today. But faith in evidence hasn't moved psychiatry forward at all in 50 years. More evidence will not fix this, because there's nothing guiding the evidence.
The unfortunate truth is that most of the evidence in "evidence based medicine" is at best too limited for general application, at worst wrong. Many of you will reflexively recoil from this, retreating from the vertigo to the crowded safety of your peers, journals, and false idols, but this empiricism is only another kind of apostasy. Repent.