September 30, 2007

Ten Things Wrong With Medical Journals


I know, right?  Only ten?



This is how references are done now:

[Image: a typical journal reference-- a superscript number in the text, keyed to an authors-and-journal citation in the reference list.]
This is madness.

I could make a career out of exposing references that have nothing to do with, or directly contradict, the referenced statement. (Here's one.)  This system of referencing makes it very hard to do this-- and I wonder if that's not the point.

It is, by contrast, very easy to link the exact article referenced-- even the exact page in the article.  Even Jezebel does this.

Online-- assuming it's not a pdf, even though this shouldn't matter-- when you click on the superscript it takes you to the reference list, which has a further link to the actual article (but not to the statement being supported.)  Ok-- why the extra step?

Note that the way references are done here is antithetical to science.  Look at the reference pictured above.  What do you see?  What is important?  What you see are the authors and the journal, not the scientific content.  That's what's implied to be important.  We're supposed to accept the science of a statement by the force of the author and journal?  But that speaks to my later point about bias.


Pay For Access

Why should government- or Pharma-sponsored research require me to pay someone else (the Journal) for access to that information?  And why only licensed academics?  So, if I'm a welder in Kentucky, I can't know what's really up with Depakote for bipolar?  I have to read some nut blogger?

It is a simple process for a Journal to host all of its articles online as free full text.  Or, better, scientists can publish their work on their own sites (hosted by a university, etc, if necessary.)

Neither are Journals necessary as repositories of vetted information.  There are numerous ways these scattered articles can be collated, packaged and even summarized for easy use.  Slashdot, Digg, and others are very effective in this regard; and something similar can be done with science.  I know, Digg can be gamed. What, Journal of Clinical Psychiatry can't?


Where's The Raw Data?

Rephrasing from above: why am I not allowed to see the raw data from a government sponsored study?  (And from Pharma-- if they agree to do a study, then they must agree to make all data public.)

You may have heard that there are rumblings about making this data accessible-- but not to everyone; only to those with appropriate access (academics, etc.)  Again-- why?



Publication Is Slow

This is a common lament, but it misses the point: publication is artificially slow.  As in on purpose.

Articles that are submitted for peer review should simply be published in a "pending peer review" section.  Other sciences already do this.  To the criticism that doctors may act on unreviewed science, it should be noted that citing "personal communication" (e.g. an email) as a reference is perfectly acceptable.  Is that safer? Oh, and about that peer review:


Peer Review

Even with the most malicious intentions, it would be nearly impossible to create a worse system than peer review.   Peer review does not merely have the potential for bias; it is specifically designed to retain bias, and to maintain the primacy of subjective opinion over objective findings.  The only people who support peer review are other peer reviewers. If necessary, money should be diverted from pediatric AIDS research to help put a stop to this oligarchy.  It's that important.

In medicine, "peer review" is the editor of a journal and three other doctors-- whom the author suggests as reviewers.   While ostensibly the author's identity is unknown to the reviewers, in practice it is simple to determine authorship (type of research; meetings; even document metadata.)  Oh, and the editor knows who you are.

Most people think peer review is some infallible system for evaluating knowledge.  It's not.  Here's what peer review does not do: it does not try to verify the accuracy of the content.  Reviewers do not have access to the raw data.  They don't re-run the statistical calculations to see if they're correct.  They don't look up references to see if they are appropriate or accurate.

So what do they do? They look for study "importance" and "relevance."  You know what that means?  Nothing.  It means four people think the article is important.  Imagine the four members of the current Administration "peer reviewing" news stories for the NY Times.   

On the force of the recommendation of these three reviewers and the editor-- who, by the way, decides whether to even send it to reviewers at all, or reject it outright-- the article gets published or not.  And there is no right to an appeal.

Imagine a movie gets previewed by four people who decide if the movie is important or not, and whether it will play in theatres.  You know what you get?  Notes On A Scandal, that's what.  And riots.

The peer review system also promotes the perpetuation of biases.  Doctors are subtly pressed into writing articles about certain topics-- consider the Depakote madness of 2000-2004; the noradrenergic hypothesis of depression in the 70s (where'd that go?); and how every issue of BMJ has an article on war.   (Except July 2008: that was the month they wrote about whether to boycott Israeli academic institutions.  Ok.)  Academic careers are made, in part, by the number and quality (i.e. journal) of publications, which will be influenced by what they think certain journals would publish. "My research focuses on things my Chairman really likes."  Can't wait to read more about evolutionary psychology.


Lack of Debate

There is no way to have a meaningful debate about an article within the Journal system.  How do you crowdsource a medical study?  As an example: if I find a logical error in an article (e.g. mistaking correlation for causation) I can only point this out by writing a "Letter To The Editor," which, you will be surprised to learn, goes to the editor.

Even if it is published, my Letter will get little attention.  And while anyone smart enough can critique a study, only an academic can write the Letter.

It is unnecessary to point out that the rest of the internet-- including news-- works very differently.

Disclosure of Conflict of Interest

Almost completely invalid for its intended purpose.

If a doctor does a promotional program, a "drug talk", he has to disclose the relationship.  But if a doctor is dating a drug rep-- that relationship he doesn't need to disclose. 

Even stranger, only commercial conflicts of interest count.  If you are a communist, or a Priest of Scientology, or a serial pedophile, none of that requires disclosure-- even if your article happens to be "Incidence Of Pedophilia Among Communists."  Neither does being funded by the NIH (any surprise that NIH studies always find that the generic is the best?)  Or being married to the Chair at Harvard. Or having a son on the drug.

Aren't personal beliefs a bias?

To single out commercial interests as somehow more damaging, more biasing than any others is preposterous.  It's not a false sense of security; it's a deliberate misdirection from all the other things that actually bias science.  And it sidesteps the entire point of scientific articles-- if they are truly scientific, if the articles were truly "peer reviewed"-- it shouldn't matter what your biases are.  I could own Pfizer.  The article on Zoloft should be able to stand on its own.

It's worth observing that the peer reviewers are not asked to disclose any of their commercial interests.



Bad Writing

No exposition needed.  Either less words or better words.


Abstracts As Promos

See this blog post, where it starts out, "I know, right: only ten?" and then you have to click to get the full article?  So my promo has really no useful information in it.  You know why?  Because I am a blogger, that's why.

Contrast that with the abstract from an important study on Lamictal for maintenance treatment in bipolar (emphasis mine):

Conclusions  Both lamotrigine and lithium were superior to placebo for the prevention of relapse or recurrence of mood episodes in patients with bipolar I disorder who had recently experienced a manic or hypomanic episode. The results indicate that lamotrigine is an effective, well-tolerated maintenance treatment for bipolar disorder, particularly for prophylaxis of depression.

The Conclusions seem to say Lamictal is good for preventing "mood episodes", mania and depression-- is there any other way to interpret it?  In fact, this study shows it is only good for preventing depression, not mania at all.  Why is it written this way?  Because the authors want to advance the idea that Lamictal is a "mood stabilizer" and not what it actually is: an antidepressant.

You have to understand that most doctors do not read the study, they don't even read the abstract-- they skim the abstract.   For this reason, the abstract has to be an accurate summary of the article, not a promo for an idea. But that's why it is written this way; it's not about the findings, it's about the authors' agenda.

What's stupid about this is that negative findings are as important to a clinician as positive findings.  They are less important, of course, to academics whose careers depend on positive findings, and the drug companies who sponsor them.


Drug Ads

Pick up a medical journal-- inside you will see drug ads.   I haven't heard many people complain that this influences the science in the journals, the way authors' "conflicts of interest" supposedly does.  But before you respond, consider that the ads are only for one product per class.  For example, in the NEJM, there is a two-page, full-color ad for Lipitor, but none for any other cholesterol drug.  Oh, my mistake-- there are two two-page ads for Lipitor (running $32,000 per issue).  Same with one inhaled insulin; one antidepressant (Effexor); one sleeper (Rozerem); etc, etc.  If having ties to Pharma influences the outcome of science, what is the effect of having financial ties to only one Pharma company per class?  (1)

Inflation In Studies

Reducing the value of something by increasing its availability is inflation.  This is magnified when the thing in question didn't have much value to begin with.  Three strategies:

  • The same data set, or the same thesis, is reworked into several different articles for different publications.  This may seem a benign way to pad your CV, but what it does is fool people into thinking something has more support than it actually does.  This is precisely how Depakote became a bipolar colossus.  
  • A topic is investigated multiple times, when even one time was too many.
  • A finding is described as novel or at least interesting, when it had already been published years earlier by a less "important" researcher.

(nod to Glen for this one)



1. I wondered if psychiatry journals, having a more limited range (e.g. no insulin ads) would have broader coverage of companies.  They did, sort of.  Archives, CNS Spectrums, Primary Psychiatry, etc, all had multiple antipsychotic ads (never more than three brands, though) but always only one antidepressant.  Not sure what to make of that.