June 24, 2009

It's Not A Lie If It's True

Starring Megan Fox.

The Brown University Psychopharmacology Update ("The Premier Monthly Forum About The Use Of Psychotropic Medications") reviews studies.  Their angle is that they don't have an angle, they are objective, they don't take Pharma money, and the editor has a beard.  

"We don't usually report on industry-funded studies..." but "the findings are compelling."

Dig deeper, my friend, dig deeper.

I.

Randomized double blind multicenter trial of Zyprexa vs. Abilify.  The study found that Zyprexa 15mg was better than Abilify 20mg, but that (from the study):

More patients experienced significant weight gain at Week 26 with olanzapine (40%) than with aripiprazole (21%; p < .05 [weighted generalized estimating equation analysis])

Brown University Psychopharm Update writes:

And the bonus point: the sponsor of the study, Bristol-Myers Squibb, is the manufacturer of aripiprazole [Abilify].  So, no surprise that olanzapine [Zyprexa] results in more weight gain than the sponsor's product, but surprising indeed that it is more effective.

Does he mean he's surprised that BMS didn't fudge the study?  Come on, does he think that somehow BMS can alter the results of a double blind trial?  How?  Remote viewing?  If the CIA couldn't get that to work, what chance does BMS have?  And if they could, do you think they'd be wasting their time with Abilify?

What's probably surprising to him, I think, is that untouched data that was negative for Abilify actually got reported for all to see.  Yes, that is surprising.

Turns out, he was right to be suspicious.


II.


I had a thought: this is a study that exists-- e.g. in a public database-- but it was sponsored by BMS.  It shows Zyprexa is better but Abilify is safer.  In which company's marketing materials does this study appear?

The answer is, in Lilly's.  The Zyprexa promotional materials show Zyprexa's slightly better efficacy, but considerably higher rates of weight gain.  Take a look:

[zyprexa weight slide.jpg: Lilly promotional slide comparing weight gain on Zyprexa vs. Abilify]


Notice anything weird?  The slide data doesn't match the study data.

The impulse is to say Lilly found a way to minimize the weight gain.  A 10 point difference may not seem like much, but next to the published 40% it is a 25% relative reduction.  Companies kill to get that kind of reduction.  But, believe it or not, that's impossible: Lilly is not allowed to exaggerate or lie.  The FDA signs off on these materials-- a dozen scientists at Lilly and the FDA have reviewed this slide and its data.  They wouldn't be able to get away with fancy spin to drive the numbers down in their promotional materials.

It took me a long time to figure it out: the slide assumes an LOCF analysis (the common default in psychiatry), while the study uses a "weighted generalized estimating equation" analysis-- MMRM territory.  Do you know what the difference is?  Exactly.

Because here's the thing: the FDA reviews and signs off on all promotional material, but they have no say at all in the actual published study.  For all they know it could be written in crayon or sheep's blood.  I know I'm going to sound like a broken record, but the weak link in the chain of science isn't Pharma, it's academia.

III.

LOCF-- last observation carried forward-- means that when a patient drops out of a study, his last recorded measurement is carried forward and counted as his endpoint.  Whatever data he did generate stays in: "If it happened, it happened."
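A minimal sketch of the mechanics, on invented numbers (assuming Python and pandas; nothing here is from the actual study):

```python
import numpy as np
import pandas as pd

# One subject's weight (kg) at each study visit; he quits after Week 12
# (all numbers invented for illustration).
weight = pd.Series(
    [90.0, 91.2, 92.5, 93.1, np.nan, np.nan, np.nan],
    index=[0, 4, 8, 12, 16, 20, 26],  # study week
)

# LOCF: whatever he did generate stays in -- his Week 12 value is
# carried forward and becomes his Week 26 "endpoint."
locf = weight.ffill()
print(locf.loc[26])  # 93.1, his last real measurement
```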

Psychiatry studies typically use LOCF; it is the default standard.  More importantly: psychiatrists assume LOCF is being used in the study they are (not) reading.

No one knows what "weighted generalized estimating equations" are.  Take this study to the nearest psychiatrist and ask him if he knows.  If he says he does, smack him in the face with it; he's lying.  In fairness, doctors don't expect this kind of curve ball, and study authors must be aware of their audience.  The purpose of the publication-- not the promotional material-- is to inform us.  It is to tell us what really happened.  They are supposed to make it as easy to understand as possible, and not trick us.
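For anyone who wants to see the gap rather than take my word for it, here is a sketch on invented data (assuming Python and statsmodels).  The mixed model below is only a stand-in for the study's weighted GEE-- I'm not reproducing their method, just showing that the same raw numbers can give two different Week 26 answers depending on the analysis:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Invented trial: 100 subjects gaining roughly 0.5 kg/week, where the
# fastest gainers tend to quit early (weight gain is why people quit).
rows = []
for subj in range(100):
    rate = rng.normal(0.5, 0.2)
    last_week = 26 if rate < 0.6 else rng.choice([8, 12, 16])
    for week in (0, 4, 8, 12, 16, 20, 26):
        if week <= last_week:
            rows.append({"subj": subj, "week": week,
                         "gain": rate * week + rng.normal(0, 0.5)})
df = pd.DataFrame(rows)

# Analysis 1 (the slide): LOCF -- each subject's last observation
# stands in for Week 26.
locf = df.sort_values("week").groupby("subj")["gain"].last()
print("LOCF mean Week 26 gain:     %.1f kg" % locf.mean())

# Analysis 2 (the paper, roughly): a longitudinal mixed model that
# extrapolates each subject's trajectory out to Week 26.
fit = smf.mixedlm("gain ~ week", df, groups="subj",
                  re_formula="~week").fit()
print("Model-implied Week 26 gain: %.1f kg" % (26 * fit.params["week"]))
```

Same subjects, same measurements; the LOCF number comes out smaller because the heaviest gainers left early and their truncated values were "carried forward."  That is the whole fight, and nobody reading only the abstract would know it.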

"It's not a trick, we told you right there in the study." 

Telling me is not the same as telling me.

Exhibit A:

A post hoc longitudinal mixed-model analysis was performed for mean change from baseline... A spatial power covariance matrix was used to model the correlation between measurements...

Exhibit B: Brown University Psychopharmacology Update didn't notice it either.

Viagra may have good efficacy, but if the results are published in The American Journal Of Geriatrics I don't expect you to have tested it on a sample of 17 year old boys who just watched Megan Fox in anything.  And if you did test it in them, you should probably include a picture.


[mf.jpg: A picture of Megan Fox, for no reason at all]


Oh, it's honest: the study authors certainly weren't lying.  But everyone must know that no one is going to figure this out on their own, right?


IV. 

Someone will inevitably email to correct me, explaining that MMRM is a completely legitimate analysis and betting that I don't understand clinical trial design and statistics.  That would be a sucker's bet.

The authors of the study didn't design a study with an unusual analysis; they designed a perfectly ordinary study, the kind everyone would expect, using LOCF-- and then later decided to analyze it differently, using something most people have never heard of.  You would only know this if you went to the BMS clinical trial registry-- the registry everyone demanded Pharma create, which no one now bothers to use-- looked the study up (138003.pdf), and then spent time comparing the two documents.  Good luck to the rest of you people who actually have a life.

Or-- and this is sort of the point, sad in its own way but true nonetheless-- you could have just looked at the Lilly slide.





Comments


June 25, 2009 4:33 AM | Posted by SusanC: | Reply

Good catch.

The obvious way to cheat at this kind of thing is multiple testing: do lots of studies, but only publicize the results when you get the answer you wanted. At p=0.05, you only have to do 20 or so studies for each one you publish.

Slightly more subtle: choice of variable. You think up 20 or so different possible side-effects, and publish the one that's statistically significant at p=0.05.

But keeping the data the same, and choosing the statistical test that gives the answer you want, that's more advanced. (But well-known to statisticians).
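A quick simulation of the first trick, as a sketch (Python with numpy/scipy; everything invented): twenty trials of a drug with zero real effect, and on average one of them comes up "significant."

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(42)

hits = 0
for study in range(20):
    # Two arms of n=50 drawn from the SAME distribution: no true effect.
    drug = rng.normal(0.0, 1.0, size=50)
    placebo = rng.normal(0.0, 1.0, size=50)
    if ttest_ind(drug, placebo).pvalue < 0.05:
        hits += 1

print(hits, "of 20 null studies were 'significant' at p < 0.05")
```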


June 25, 2009 9:54 AM | Posted by medsvstherapy: | Reply

Yes, you gotta dig deep to figure out how they are deceiving you. Along with the issue of how to handle missing data is a thornier problem: the randomized controlled trial design is based on a handful of theoretical ideas concerning its value and the design necessary to serve the goal of evaluating suspected causal relations-- and in that whole range of ideas, those "assumptions" they were always talking about in stats class, missing data is not one of them. The logic of the RCT does not include missing data.

If you have missing data, you have begun to stray from the orthodoxy, and thus the power, of the RCT. You have introduced the spectre of bias, regardless of how it might enter or how you might adjust your analysis to compensate for it. It is like forgetting the condom once in a hundred encounters: you are "safer," not "safe."

You can use a maximum likelihood estimator to generate testable parameters of slope for symptom remission with drug A versus drug B, and your answer may closely mirror the true reality, and your p may be less than point oh-five-- but if data is missing, bias has entered somewhere and your result has been compromised. Once you grasp this idea, you are in a position to look at any RCT result with at least modest skepticism: at best 14K gold versus a 24K "gold standard." And if a pharmaceutical company sponsored the RCT, suspect gold plate.


June 25, 2009 2:46 PM | Posted by Anonymous: | Reply

Why is more research not done on identifying and treating thyroid problems? No money in it? After 20+ years of doctors shoving antidepressants at me and ignoring my thyroid symptoms-- because out-of-"normal"-range levels didn't show on a blood test until I was almost comatose-- I am finally getting some relief from the myriad symptoms that made me a social pariah and target: depression, low energy, and unrelenting weight gain. I had all the other not-so-visible symptoms too: clotty periods, premature gray hair, hair loss, skin/lymph issues, etc. And a family history I only found out about 5 years ago.

You'd think they would consider the symptoms, but they don't, only the blood test which has a "normal" range as wide as the Grand Canyon. Yet with no test to measure the need and no knowledge of how they actually work, antidepressants are still handed out like candy. I believe all of my horrible side effects, which were sometimes worse than the "illness," were caused by the drugs making my hypothyroid condition worse. 20+ years of a life that would have been radically different if any Dr. had gone down the thyroid path instead of the Pharma kickback route.

The first time I saw the Abilify commercial I thought it was a Saturday Night Live spoof. Sadly, it is not.


June 25, 2009 5:54 PM | Posted, in reply to Anonymous's comment, by Chad: | Reply

I wholeheartedly agree with your more general point. Medicine in general seems geared toward the most obvious, quick solution: a pill. After working in Pharma for 8 years I got so incredibly tired of patients who actively seek out that type of treatment. Your case is a great example of why it is so important to actually treat the patient, not just the current ailment.

This blog is geared toward psychiatric medications, but it applies across the board, anywhere from antibiotics to ED medications. Got the sniffles? Clearly Augmentin will clear it up-- no need to figure out if it's a bacterial infection, and the fact that it will coincidentally clear up in 3 or 4 days anyway doesn't mean anything. No luck with the ladies because "it" ain't workin? Well, here's a pill. Nah, don't worry, it's normal for that to happen-- no need to get checked out for really serious cardiovascular problems and treat those instead.


June 25, 2009 9:15 PM | Posted by Anonymous: | Reply

I thought LOCF was banned from FDA studies now.


June 25, 2009 10:10 PM | Posted, in reply to Anonymous's comment, by Alone: | Reply

Alone's response: not banned, but they have begun to realize that mixed models can have advantages over LOCF in certain analyses.

If 10% of people drop out, we can carry their data forward; or, we can make some assumptions about how they might have responded: for example, take the trend of their data up until their dropout, and extend it along its trajectory to the end of the study.
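A sketch of that second approach, on invented numbers (Python/numpy; not any particular study's method):

```python
import numpy as np

# A dropout's visits: he quit after Week 12 (numbers invented).
weeks = np.array([0, 4, 8, 12])
gain = np.array([0.0, 1.8, 4.1, 6.2])  # kg gained at each visit

# LOCF: his Week 12 value simply becomes his Week 26 endpoint.
locf_endpoint = gain[-1]  # 6.2 kg

# Trend extrapolation: fit his trajectory and extend it to Week 26.
slope, intercept = np.polyfit(weeks, gain, 1)
trend_endpoint = slope * 26 + intercept  # ~13.5 kg, about double LOCF
print(locf_endpoint, round(trend_endpoint, 1))
```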

But neither mixed models nor LOCF are particularly accurate, or informative, when a lot of people drop out-- say, 50%. In those cases, you may as well design a different study.

But back to the point of the post: it actually doesn't matter if mixed models are infinitely superior to LOCF, because no doctor has any way of judging the results of such a study. Metric may be superior to feet and inches, but if I tell you it's 12,000 meters away, do you have any idea if you should walk, drive, or fly?

I'm sure the study authors were trying to be all scientific.  But in using a "superior analysis" (which, BTW, is not actually superior to LOCF in this study) that will be misinterpreted by readers ("I assumed it was in feet..?"), did they put more accurate information in your head, or less?


June 26, 2009 6:35 AM | Posted by Whatever: | Reply

I see they have finally found a worthy replacement for Angelina Jolie.

Congrats to everyone who had the b to start shorting her back in 2007.


June 30, 2009 3:29 PM | Posted by Ed Campion: | Reply

Megan Fox picture should be bigger.
