June 24, 2009

It's Not A Lie If It's True

Starring Megan Fox.

The Brown University Psychopharmacology Update ("The Premier Monthly Forum About The Use Of Psychotropic Medications") reviews studies.  Their angle is that they don't have an angle: they're objective, they don't take Pharma money, and the editor has a beard.

"We don't usually report on industry-funded studies..." but "the findings are compelling."

Dig deeper, my friend, dig deeper.

I.

A randomized, double-blind, multicenter trial of Zyprexa vs. Abilify.  The study found that Zyprexa 15mg was better than Abilify 20mg, but that (from the study):

More patients experienced significant weight gain at Week 26 with olanzapine (40%) than with aripiprazole (21%; p < .05 [weighted generalized estimating equation analysis])

Brown University Psychopharm Update writes:

And the bonus point: the sponsor of the study, Bristol-Myers Squibb, is the manufacturer of aripiprazole [Abilify].  So, no surprise that olanzapine [Zyprexa] results in more weight gain than the sponsor's product, but surprising indeed that it is more effective.

Does he mean he's surprised that BMS didn't fudge the study?  Come on, does he think that somehow BMS can alter the results of a double-blind trial?  How?  Remote viewing?  If the CIA couldn't get that to work, what chance does BMS have?  And if they could, do you think they'd be wasting their time with Abilify?

What's surprising to him, I think, is that untouched data that was negative for Abilify actually got reported for all to see.  Yes, that is surprising.

Turns out, he was right to be suspicious.


II.


I had a thought: this is a study that exists-- e.g. in a public database-- but it was sponsored by BMS.  It shows Zyprexa is better but Abilify is safer.  In which company's marketing materials does this study appear?

The answer is, in Lilly's.  The Zyprexa promotional materials show Zyprexa's slightly better efficacy, but considerably higher rates of weight gain.  Take a look:

[Slide from Lilly's Zyprexa promotional materials: weight gain, Zyprexa vs. Abilify]


Notice anything weird?  The slide data doesn't match the study data.

The impulse is to say Lilly found a way to minimize the weight gain.  A 10-point difference may not seem like much, but it is a 25% reduction relative to the published study.  Companies kill to get that kind of reduction.  But, believe it or not, that's impossible: Lilly is not allowed to exaggerate or lie.  The FDA signs off on these materials-- a dozen scientists at Lilly and the FDA have reviewed this slide and the data.  They wouldn't be able to get away with fancy spin to drive the numbers down in their promotional materials.
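
If the absolute vs. relative distinction is fuzzy, the arithmetic is below; the 30% slide figure is my back-calculation from the 10-point/25% numbers above, not a quote from the slide:

```python
# The published study reported ~40% significant weight gain on olanzapine;
# a slide figure of ~30% is assumed here, inferred from the
# "10-point difference / 25% reduction" described above.
published = 0.40
slide = 0.30

absolute_drop = published - slide            # 0.10 -> ten percentage points
relative_drop = absolute_drop / published    # 0.25 -> a 25% relative reduction

print(f"absolute: {absolute_drop:.0%}, relative: {relative_drop:.0%}")
```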

It took me a long time to figure it out: the slide assumes an LOCF analysis (the common one in psychiatry), while the study uses a "weighted generalized estimating equation" analysis/MMRM.  Do you know what the difference is?  Exactly.

Because here's the thing: the FDA reviews and signs off on all promotional material, but it has no say at all in the actual published study.  For all they know it could be written in crayon or sheep's blood.  I know I'm going to sound like a broken record, but the weak link in the chain of science isn't Pharma, it's academia.

III.

LOCF-- last observation carried forward-- means that when a patient drops out of a study, his last recorded measurement is carried forward and counted as his endpoint; whatever data he did generate stays in.  "If it happened, it happened."
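
A minimal sketch of what that does to a dropout's endpoint, on made-up weight data (everything below is invented for illustration):

```python
import pandas as pd

# Hypothetical weight gain from baseline (kg); NaN means the patient dropped out.
data = pd.DataFrame({
    "week0":  [0.0, 0.0, 0.0],
    "week8":  [1.0, 4.0, 2.0],
    "week26": [2.0, None, 3.0],   # pt2 never made it to week 26
}, index=["pt1", "pt2", "pt3"])

# LOCF: fill each missing visit with that patient's last observed value.
locf = data.ffill(axis=1)
print(locf["week26"])   # pt2's week-26 "endpoint" is now 4.0, his week-8 value
```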

Psychiatry studies typically use LOCF; it is the default standard.  More importantly: psychiatrists assume LOCF is being used in the study they are (not) reading.

No one knows what "weighted generalized estimating equations" are.  Take this study to the nearest psychiatrist and ask him if he knows.  If he says he does, smack him in the face with it, he's lying.  In fairness, doctors don't expect this kind of curve ball, and study authors must be aware of their audience.  The purpose of the publication-- not the promotional material-- is to inform us.  It is to tell us what really happened.  They are supposed to make it as easy to understand as possible, and not trick us.
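
For the record, here is roughly what the species looks like-- a garden-variety GEE on invented repeated-measures data, not the study's weighted version:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Invented data: 30 patients, 3 visits each; at each visit we record whether
# the patient has crossed the "significant weight gain" threshold (1) or not (0).
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "pt":   np.repeat(np.arange(30), 3),           # patient id
    "drug": np.repeat(rng.integers(0, 2, 30), 3),  # 0 = drug A, 1 = drug B
})
df["gained"] = rng.binomial(1, 0.2 + 0.2 * df["drug"])

# A GEE models each patient's visits as correlated rather than independent,
# so patients who leave early contribute whatever visits they attended.
model = smf.gee("gained ~ drug", groups="pt", data=df,
                family=sm.families.Binomial(),
                cov_struct=sm.cov_struct.Exchangeable())
print(model.fit().summary())
```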

"It's not a trick, we told you right there in the study." 

Telling me is not the same as telling me.

Exhibit A:

A post hoc longitudinal mixed-model analysis was performed for mean change from baseline... A spatial power covariance matrix was used to model the correlation between measurements...

Exhibit B: Brown University Psychopharmacology Update didn't notice it either.
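
And for the mixed-model half of the jargon, a sketch of the general family Exhibit A belongs to: a longitudinal mixed model fit to whatever visits each patient actually attended.  This is a generic random-intercept model on fabricated data, not the study's post hoc "spatial power covariance" specification:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Fabricated trial: each patient gains ~0.15 kg/week; some drop out after week 8.
rng = np.random.default_rng(0)
rows = []
for pt in range(40):
    for wk in [0, 4, 8, 16, 26]:
        if wk > 8 and rng.random() < 0.15:   # simulate dropout
            break
        rows.append({"pt": pt, "week": wk,
                     "wt": 0.15 * wk + rng.normal(scale=1.0)})
df_long = pd.DataFrame(rows)

# Unlike LOCF, nothing is imputed: the model uses every visit that actually
# happened and estimates the trajectory, with a random intercept per patient.
fit = smf.mixedlm("wt ~ week", data=df_long, groups="pt").fit()
print(fit.summary())
```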

Viagra may have good efficacy, but if the results are published in The American Journal Of Geriatrics I don't expect you to have tested it on a sample of 17-year-old boys who just watched Megan Fox in anything.  And if you did test it in them, you should probably include a picture.


[A picture of Megan Fox, for no reason at all]


Oh, it's honest: the study authors certainly weren't lying.  But everyone must know that no one is going to figure this out on their own, right?


IV. 

Someone will inevitably email me to point out that MMRM is completely legitimate, betting that I don't understand clinical trial design and statistics.  That would be a sucker's bet.

The authors of the study didn't design a study with an unusual analysis; they designed a perfectly ordinary study, the kind everyone would expect, using LOCF-- and then later decided to analyze it differently, using something most people have never heard of.  You would only know this if you went to the BMS clinical trial registry-- the thing everyone demanded Pharma create and that now no one bothers to use-- looked the study up (138003.pdf), and then spent time comparing the two documents.  Good luck to the rest of you people who actually have a life.

Or-- and this is sort of the point, sad in its own way but true nonetheless-- you could have just looked at the Lilly slide.




