When patients complain about doctors, it's usually about overcharging and undercaring. When doctors complain about doctors, however, it's usually about those with "loose" practice, especially in the inner cities, who seem to overprescribe Xanax and Percocet.
But let's ask a different question: what would happen if all of these doctors disappeared? If there was no fast and easy way to get prescribed legal Xanax, would all the Xanax seekers just disappear?
In large part, many psychiatrists and primary docs have the luxury of proclaiming that they "don't give out Xanax and Percocet" because there is somewhere else for those patients to go. Dr. Smith from University Clinic doesn't have to haggle over #10 Percocets because the patients can go to other doctors who are much more-- comfortable, let's say-- giving out #90 Percocets a month.
I was trying to think of an analogy. Black market jumped to mind, but these items aren't illegal nor illegally obtained. Surprisingly, the best analogy I found was illegal immigration.
"Xanax, Vicodin, Percocet, Ritalin and Valium" ››
I'm off for two weeks, taking the opportunity to upgrade the computers/monitors and plan my next move. I will also be starting another blog under another alias. I'll reveal it as mine if it takes off.
Also, to all those who emailed me about the Time Person of the Year post: thanks; it wasn't Photoshop but MS Paint; I have nothing against Grossman at all, I loved his King piece, the piece wasn't about Grossman, it was about us, society, our purposeful alienation from each other; I changed the screen to blue to reference the Blue Screen of Death; no, "Go Fuck Yourself" wasn't supposed to be (only) mean, it was a double entendre: narcissism--> self love--> "Go Fuck Yourself."
This slide, taken from a drug company program like many others, shows that using a mood stabilizer + antipsychotic is better than a mood stabilizer alone.
Look carefully. This is what is wrong with psychiatry.
As you can see, Risperdal + Depakote (orange) is better than Depakote alone (blue.)
This is not a finding unique to Risperdal. Every antipsychotic has virtually identical data for adjunctive treatment, which is good, because they shouldn't have different efficacies.
So given that 8 other drugs have identical findings, these data suggest that, essentially, two drugs are better than one.
That's obvious, right? That's what the picture shows? Well, here's what's wrong with psychiatry: without looking at the slide, tell me what the y-axis was. Write it here:______________
The problem in psychiatry is that no one ever looks at the y-axis. We assume that the y-axis is a good one, that whatever measure was used is worthwhile. We assume that the y-axis has been vetted: by the authors of the study, by the reviewers of the article, by the editors of the journal, and by at least some of the readers. So we focus instead on statistical significance, study design, etc. Well, I'm here to tell you: don't trust that anyone has vetted anything.
The y-axis here is "% of patients with a YMRS <12." What's a YMRS? A mania scale. But what is a 12? Is it high? Low? What's the maximum score? What counts as manic? What questions does the YMRS ask, how does it measure the answers? You don't know? Again-- we figure someone else vetted it. The YMRS is a good scale because that's what scale we use.
Forget about the YMRS-- what does "% of patients" mean? If this is an efficacy in adjunct treatment study, why not have the y-axis be just the YMRS, to show how much it went down with one or two drugs?
Because that's the trick. This y-axis doesn't say "people got more better on two drugs." It says, "more people on two drugs got better."
Pretend you have a room with 100 manics. You give them all Depakote, and 30% get better. Now add Risperdal-- another 30% get better. But that doesn't mean the Depakote responders needed Risperdal, or the Risperdal responders needed Depakote-- or the other 40% got anything out of either drug.
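The thought experiment can be run as a toy simulation (all numbers invented for illustration, and assuming responses to the two drugs are independent rather than perfectly non-overlapping): with no synergy whatsoever, the combination arm still wins on a "% of patients" y-axis.

```python
import random

random.seed(0)
N = 100_000  # simulated manic patients

# Hypothetical, invented response rates: each patient responds to
# drug A, drug B, both, or neither -- independently, with NO synergy.
p_a, p_b = 0.30, 0.30

responds_a = [random.random() < p_a for _ in range(N)]
responds_b = [random.random() < p_b for _ in range(N)]

mono = sum(responds_a) / N
combo = sum(a or b for a, b in zip(responds_a, responds_b)) / N

print(f"responders on drug A alone:   {mono:.1%}")   # ~30%
print(f"responders on A + B together: {combo:.1%}")  # ~51% = 1 - (0.7 * 0.7)
# The combination "wins" on the %-of-patients y-axis even though no
# individual patient in this simulation benefited from taking both drugs.
```

The combination arm looks better simply because two independent chances at response beat one-- which says nothing about whether any given patient needed both drugs.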
It could be true that the effects are additive-- but this study doesn't show that. No study using "% of patients" can speak to synergistic effects. In other words, these studies do not say, "if you don't respond to one drug, add the second." They say, "if you don't respond to one drug, switch to another drug." They don't justify polypharmacy. They require trials at monotherapy.
So you may ask, well, do two drugs lower mania better than one drug, or not?
Depending on what week you look at, MS alone reduces the score by 13; two drugs give you another 3-6 points. On an 11-question scale, rated 0-4. So no, it's not better.
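Taking the post's description of the scale at face value (11 items rated 0-4 each-- note the actual YMRS weights four of its items more heavily, so these are illustrative numbers only), the arithmetic behind "it's not better" is simple:

```python
# Back-of-envelope effect sizes, using the post's numbers.
# Assumption: 11 items scored 0-4 each (illustrative; the real YMRS
# scores four items 0-8, giving a higher maximum).
items, max_per_item = 11, 4
max_score = items * max_per_item            # 44

mono_drop = 13        # points of improvement on mood stabilizer alone
combo_extra = (3, 6)  # additional points from adding the antipsychotic

print(f"monotherapy: -{mono_drop}/{max_score} = {mono_drop / max_score:.0%} of the scale")
for extra in combo_extra:
    print(f"adjunct adds: -{extra}/{max_score} = {extra / max_score:.0%} of the scale")
```

In other words, monotherapy already moves the patient about 30% of the way down the scale; the second drug adds roughly 7-14% more, at double the side effects and cost.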
I have yet to meet someone who doesn't interpret these studies as supportive of polypharmacy, and it's not because they aren't critical. The reason for the blindness is the paradigm: they think-- want-- the treatment of bipolar to be the same as the treatment of HIV, or cancer, or pregnancy. The important difference is that these other diseases are binary: once you get them, nothing you do makes it worse. Take pregnancy. If two chemicals combined lower the rate of pregnancy, it is clearly worth it to take both. One chemical might have been enough, but who wants to find out? So you take the risk, and you eat both. This is also my argument against making Plan B OTC. Bipolar is not like this: if one or two drugs in succession fail, or things get bad, you can always resort to polypharmacy later.
Again, it may be true that polypharmacy is necessary. Maybe 3-6 points are needed. Maybe two drugs gets you better faster. But it can't be the default, it can't be your opening volley. Because I can't prove two drugs are better than one, but I can prove they have twice as many side effects, and are twice as expensive.
At minimum, if polypharmacy is successfully used to break an acute episode, you should then try to reduce the dose and/or number of drugs.
Now, here's the homework question: if all these antimanics have about the same efficacy, and polypharmacy should be third or fourth line, why do we start with Depakote? Is it better? Safer? Cheaper? What are the reasons behind our practice?
Tolerance develops to benzodiazepines-- and every other antiepileptic, according to the new Epilepsia article.
In general, efficacy of all AEDs decreases with long-term exposure. That's tolerance. If being on an AED reduces seizures by 50%, then tolerance is defined as occurring when you return to less than a 50% reduction of symptoms. Thus defined, tolerance (of such severity that increased doses do not help) occurs in 10-50%.
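That operational definition can be made explicit. A minimal sketch (the function and the example numbers are my own illustration, not from the article):

```python
def is_tolerant(baseline: float, initial_on_drug: float, current_on_drug: float) -> bool:
    """Tolerance, per the definition above: symptom reduction on a stable
    dose has fallen below the reduction the drug originally achieved."""
    initial_reduction = 1 - initial_on_drug / baseline
    current_reduction = 1 - current_on_drug / baseline
    return current_reduction < initial_reduction

# 10 seizures/month at baseline, 5/month when the drug started working
# (a 50% reduction), now 7/month on the same dose (only 30%): tolerant.
print(is_tolerant(10, 5, 7))  # True
print(is_tolerant(10, 5, 5))  # False: still at the original 50% reduction
```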
Worse, there appears to be cross tolerance. For example, and likely most significant for psychiatric patients, Depakote "lost >50% of its anticonvulsant efficacy in mice pretreated twice daily for only 3 days with [benzos]."
Why does tolerance occur? On the one hand is the obvious metabolic concern-- autoinduction of hepatic enzymes-- but this is really only relevant with the first generation drugs, especially CBZ and phenobarbital, which are such powerful inducers of cytochrome enzymes that they actually induce their own metabolism. Depakote is the opposite-- an inhibitor of cytochromes-- which is why you must reduce the initial doses of Lamictal when given with Depakote, so as not to "overdose" and increase the risk of rash. On the other hand are pharmacodynamic effects, which are of three types: downregulation of binding sites; functional uncoupling (on the GABA-A receptor, benzo binding has less of a positive allosteric effect on GABA binding); and downregulation of, or decreased sensitivity of, ion channels (for example, Neurontin downregulates Ca2+ channels, benzos reduce Cl- channel function, etc.) Activity on the ion channels (as opposed to the receptors) would partially explain cross tolerance, since these ion channels are the downstream target of many drugs.
No, wait, there's a fourth reason for "tolerance:" maybe the seizure disorder itself changes over time, so it looks like you became tolerant, but really you have a "new" seizure disorder. This is analogous to bipolar disorder, which evolves over time-- how you present at 25 may be different from how you present at 35; your manias are different, etc.
So now we have a problem: is there any reason to think that tolerance to the antimanic/antidepressive effects of AEDs wouldn't occur? If seizures, why not mania? If mania is a strictly biochemical dysfunction in the brain, shouldn't tolerance to its treatment occur? Do we make patients worse by keeping them on the meds? Or at least harder to treat? And if mania isn't strictly biochemical-- if we're allowing that life happens-- do we really believe that a fixed dose of an anti-epileptic administered over years is going to prevent a negative response to a life event? And wait a second-- doesn't mania spontaneously remit even without medication? Shouldn't we just, sort of, help nature along, or even get out of its way?
I'm not saying not to treat-- I'm saying not to overtreat.
A guy is on 1500mg Depakote today. What do you do when the patient relapses? Increase to 2000mg? Then what? When does it stop? When does this practice not ultimately result in polypharmacy?
Any reason-- biochemical or epidemiological, I'll take any offer-- why we should not be treating symptomatically rather than prophylactically? Antimanics when you're manic, then stop them when you're better?
I know everyone thinks Osler helped write the DSM after finding the gene for psychiatry, and Hippocrates is jealous because he's balding junior faculty, but perhaps we should go reread The Epidemics and rethink our principles.
In researching something else, I learned that Imitrex may actually treat the headache associated with subarachnoid hemorrhage-- which is a bad thing, because you're still going to die.
Subarachnoid hemorrhage-- the "worst headache of your life"-- comes on suddenly, lasts for hours (even days, yes, days), is worse in light or with sounds, but is not affected by movement, and is the result of an aneurysm (usually middle cerebral artery) popping. CT is positive in 95% of cases if taken early-- the longer time passes, the less sensitive CT becomes.
A case report describes a woman whose headache improved with Imitrex (6mg SQ) but who still ultimately died. The authors said this was the only case report they found, but in the same issue is another such case report (improved after 6mg SQ and died later that day), and a year later some British guys reported three other migraine patients who came in with undiagnosed SAH whose headaches got better after getting Imitrex. (Two got 6mg, the other got 3x100mg.) They were correctly diagnosed only after they came back with headache and meningeal signs, and got CTs.
The editor of the first journal notes that sumatriptan is not "migraine-specific" and is effective in treating other head pains (such as viral meningitis, and, I discovered, orgasm headaches *.) The authors of the earlier SAH report hypothesize that since triptans block transmission at the trigeminal nucleus caudalis, any pain from the meninges should be blocked. (In bacterial meningitis, the pain relief may also be augmented by the 5HT1D and B agonism, which (in mice) reduces inflammation, decreases intracranial pressure, and reduces white blood cells in the CSF(!)) This may only be true in acute meningitis, as failure in two meningitis patients may have been the result of sensitization of the caudalis neurons (where triptans are supposed to block input) and spontaneous activity. (So get your triptans early.)
The obvious message here, given the efficacy in SAH with such low doses of Imitrex, is that one should not assume efficacy is diagnostic of a migraine. Triptans seem to be efficacious across a variety of trigeminal neuropathies, which, like everything else in medicine, is good and bad.
* Orgasm headache: apparently triptans can treat or prevent "orgasmic headaches." The funniest line is in the abstract of that paper: "In patients who chose to predict their sexual activity, short-term prophylaxis with oral triptans 30 min before sexual activity might be a therapeutic option."
Long but necessary.
"The Ten Biggest Mistakes Psychiatrists Make" ››
Previously, I had written what I thought to be an outstanding article about suicide documentation. The main point was a refocusing of the note away from Objective and towards Assessment. It now occurs to me that what I was really trying to get at is the lost art of writing a psychiatric formulation of a patient.
The reason we don't do formulations anymore-- they're not even taught in most residencies, certainly not in mine, nor now to the residents I supervise-- is because it's not clear what the formulation is supposed to do. Doctors get overwhelmed by the psychodynamics of it and can't see the practical utility. Someone brought them twenty ingredients but didn't tell them what they were cooking.
A formulation is different than a diagnosis or description of the patient. The formulation seeks to convey the relevant parts of a patient so that you can predict how a patient might behave in future circumstances. By way of example, a formulation is similar to a "profile" in crime movies. When they say things like, "he's going to want to tie the women with piano wires, because he's a schizophrenic who was forced to sleep in a tuba..." that's a formulation (sort of-- you get the idea.)
The formulation helps prediction by linking the various aspects-- seemingly unrelated, perhaps-- of a patient's existence. It's the stuff you know is relevant, but DSM and standard psychiatry have no room for. What does it mean if I tell you an inpatient brought her fuzzy bunny slippers with her? That goes in the formulation. A statement such as, "the strong family history of bipolar disorder, along with his chronic alcohol abuse and prior suicide attempts, the pending divorce and custody battle, and his recent apostasy from Catholicism, put him at higher risk for suicide" is the type of sentence I want in the Assessment-- and it is precisely a short example of a "biopsychosocial" formulation.
Note the importance of having all factors together, as opposed to individually. It sets up the logic; it lets the reader know, immediately and obviously, what you were thinking. This is very different than writing in one part of the note, "Fam Hx: strong bipolar;" and in another part of the note, "Chronic alcohol abuse; history of multiple suicide attempts;" and in another place, "patient divorcing, and custody trial is next month." Putting it that way, in the classic H&P format, forces the reader to have to infer. Put in a biopsychosocial formulation, and the reader gets it instantly without even reading the rest of the H&P. That's what you want.
Interestingly, the term "biopsychosocial" was coined by George Engel, psychoanalyst(?), who in 1977 made the startling observation, "The dominant model of disease today is biomedical, and it leaves no room within its framework for the social, psychological, and behavioral dimensions of illness."
[It] would seem that psychiatry would do well to emulate its sister medical disciplines by finally embracing once and for all the medical model of disease. But I do not accept such a premise. Rather, I contend that all medicine is in crisis, and, further, that medicine's crisis derives from the same basic fault as psychiatry's, namely, adherence to a model of disease no longer adequate for the scientific tasks and social responsibilities of either medicine or psychiatry.
Plus ça change...
Engel, like others, had understood that somatic symptoms such as pain, weakness, etc, and autonomic symptoms such as reflux, tachycardia, etc could be symbolic expressions of emotion or conflict. How could the Objective portion of a note ever explain why you discharged a person with acute bilateral leg paralysis? It can't-- but a biopsychosocial formulation can.
As per Engel, the main question such a biopsychosocial model seeks to answer is why some patients experience an "illness" while others experience a "problem of living." Importantly, the patient himself doesn't often know: the patient defines it as an illness recursively by whether or not he "needs" a doctor, and not by an actual understanding of what's wrong with him. It's the doctor's job to decide whether it is actually an illness or a life problem, and then properly re-educate and re-train the patient.
Note that in my post about suicide documentation, the hypothetical patient was not malingering. He believed he needed to be hospitalized because he was suicidal. But when you discharge such a patient from the ER, you are thinking that the person will not die-- the suicidality is an expression of something else. This is Engel's dichotomy. The patient thinks one thing, you think another-- it's your job to explain to the patient what's really going on, AND explain to the reader why you did what you did.
Typically, formulations are taught, in my opinion, backwards, so students "don't get it." You're taught to start with what's going on now; then describe the historical factors that made the patient who he is (including genetics, upbringing, social stressors, meds, etc.); then psychodynamic explanations; and then your proposed treatment and how you predict the patient will respond. I think it is easier to go backwards. First, decide what you think is going to happen in the future (will commit suicide, won't relapse, is a mania risk, etc.) and then explain what it is about his past and present that makes you think this. In this way, you're writing the formulation with a purpose.
"Joe came to the ER for suicidality after he got drunk after getting divorce papers.
Joe takes rejection very hard, and characteristically when the rejection is new, he doesn't spend time to think things through. He exhibits poor judgment (give examples here or in Objective), is impulsive (examples), and also does things which further reduce his judgment and raise his impulsivity (like get drunk.)
Joe has several narcissistic features. For example, importantly, his suicidality is directed at his ex-wife. The point of the attempt is that she find out, that she know he is feeling hurt. If it was guaranteed that she would never find out, he would not attempt suicide because it would have lost its meaning. He needs her, or at least someone, to acknowledge his pain, and see him as the person he is trying to portray. As we talked, I made it clear that I did see he was hurt, and I understood the rejection--how it not only was a loss of a wife, but also a hint that he himself was unworthy of her. We discussed that she was entitled to leave him, but that she could not determine his value."
etc, etc. You see how even without an Objective portion, the narrative in the Assessment is quite clear. The reader understands what you were seeing and thinking.
Addendum 11/15/06: Fair is fair. I found an even better review by one Eric Chudler, PhD at Univ. of Washington, called Neuroscience for Kids. (don't laugh). I didn't review all the links, but it is certainly more comprehensive than what I have here.
You know how everyone says that people go insane when there's a full moon? Well, I looked it up.
Most studies finding a link between violence and the moon were done in the 1970s. For example, a 1978 study found a lunar relationship to everything-- suicides, assaults, MVAs, and psych ER presentations, with both homicides and assaults occurring more often around the full moon. Then again, you have to be suspicious of any study that actually tells you they actually used a computer.
But by the 1990s, this lunar relationship was on the way out. A 1997 Italian study found no relationship between community psych contacts and the moon phases. A 1998 Australian study found no relationship between violent episodes in inpatient psychiatric patients and the moon phases. A Spanish 2002 study found no link between ER presentations for violence and the moon's luminosity. A German 2005 study found only the weakest link between completed suicide and the moon (the new moon, mostly.) A 1992 Canadian study reviewed 20 studies covering 30 years and found no link to attempts or completed suicides and lunar phases. And, to prove a point, a gigantic Austrian study in 2003 found no relationship between lunar parameters (phases or sideric) and any ER presentations.
Which brings me to one point-- do Americans do anything other than drug studies? Well, one non-clinical study was done in Texas and found no link between prisoner violence and lunar phases.
So it is with violence and suicide. But what about other behaviors? I haven't had time to investigate the question, but two studies are suggestive. One (British) 2000 study found a slight increase in presentation to family practice clinics during full moons that was not due to psychiatric symptoms. An Austrian 2003 study found a strong relationship between thyroid clinic appointments and dates around the full moon. And a strange (British) 2003 study found that women called a crisis center more frequently on the new moon.
I did find an interesting (Greek) study finding an excess of seizures on full moons (34% vs. about 21% for the other phases.) Importantly (and in contrast to suggestions by other studies) these were not pseudoseizures, because all patients were monitored. The authors speculate either electromagnetic/gravitational effects (hey, it could happen) or an interaction between the intrinsic seizure threshold and the environment (i.e. you can change your own threshold.)
My interpretation of this is that the moon can't affect your behavior directly (duh), but one's relationship to lunar cycles could influence your behavior. Take the classic wolf and full moon relationship. Prey animals, such as rats, generally reduce their activity during the full moon (don't want to get caught, I guess.) Wild maned wolves (which eat rats) travelled significantly less during the full moon. The authors' explanation was that prey is less available, so wolves would want to conserve energy. Additionally, maybe one reason why so few studies are American is that we have a lot of artificial night light, so the moon has less or no influence, while elsewhere there is less artificial light? Who knows. I'm going to bed.
Here's a question: can an antipsychotic be an antidepressant? Why, or why not?
The correct answer is that the question is invalid, because there is no such thing as an "antipsychotic" or an "antidepressant." We (should) define them based on what they do, not what they are. Therefore, Wellbutrin and Effexor are both antidepressants if and only if they both treat depression-- not because of some element of their pharmacologies, which are anyway different. Strattera, on the other hand-- which has a pharmacology (in some ways) similar to Effexor-- is not an antidepressant, only because it doesn't treat depression.
It follows that just because something is called an antidepressant, or antihypertensive, it doesn't necessarily take on all the other properties or side effects of the others in its "class." Not all "antidepressants" have withdrawal syndromes (only SSRIs do). Not all antihypertensives cause urination (only diuretics do.) You wouldn't dare put a "class labeling" of "diuresis" on "antihypertensives."
So you see where I'm going with this-- except you don't.
I've previously yelled about the inanity of "antipsychotic induced diabetes" or "antidepressant induced mania" when they ignore pharmacologies, doses, and, of course, actual data.
But today I saw something that I now understand to be one of the signs of the Apocalypse. It is the new package insert of Seroquel, which just got a new indication for the treatment of bipolar depression. The new PI reads:
Suicidality in children and adolescents - antidepressants increased the risk of suicidal thinking and behavior (4% vs 2% for placebo) in short-term studies of 9 antidepressant drugs in children and adolescents with major depressive disorder and other psychiatric disorders. Patients started on therapy should be observed closely for clinical worsening, suicidality, or unusual changes in behavior. Families and caregivers should be advised of the need for close observation and communication with the prescriber. SEROQUEL® is not approved for use in pediatric patients. (see Boxed Warning)
Stating the obvious: in none of these 9 studies was any patient actually ever on Seroquel; Seroquel itself is not associated with a risk of suicide; it's not even been tested for major depressive disorder; and, well, this isn't very rigorous science, is it?
Just because a drug is now called an antidepressant, it carries the same risk as the SSRIs? (Whether even SSRIs have this risk is beside the point.) Isn't that, well, racist?
This is not really about preventing suicide. If we were really worried about suicide, then why, 24 hours before the FDA posted this warning, did no one care about Seroquel's doubling of the suicide rate? Oh, because it doesn't actually double the suicide rate? Die.
So the game is clearly not about science, it's about politics, it's about liability, it's about money.
If this was honestly about protecting children from suicide, we'd shrug our shoulders and say, "well, they're just very, very cautious, so we'll be careful and keep going." But that's not what this is. What this is is factually inaccurate, misleading, and therefore more dangerous, more harmful. In a simple example, this warning protects no one from a risk of suicide-- no potentially suicidal patient is going to look at this and say, "well, crap, I'm not taking this." But it may prevent someone from taking it when they could actually benefit. See?
This is Structuralism gone very badly awry, Saussure just bought a pick axe and he's come looking for us all.
You may as well find out what you're up to.
"The Most Prescribed Drugs" ››
I'll give you the punch line first: In each of the Danish, Swedish, Finnish, American, and Canadian studies, appx. 0.4% of breast implant patients killed themselves, representing a two to threefold higher risk than the general population. In some studies, the risk of suicide was increased to 1.5 times for any type of plastic surgery. Getting implants over 40 may also be a risk for suicide.
2761 Danish women who got breast implants from 1973-1995 were compared to 7071 women who got breast reduction, and 11736 who were considered controls. Median age was about 31.
14 (0.5%) breast implant patients committed suicide, 3 times more than expected (i.e. standardized mortality ratio = 3). 7 of them had been previously psychiatrically hospitalized. 220 (8%) of all implant patients were psychiatrically hospitalized.
22 (0.3%) breast reduction patients committed suicide, 1.6 times more than expected. 6 of them had been previously psychiatrically hospitalized. 329 (4.7%) of all reduction patients were previously psych hospitalized.
0 controls committed suicide. 96 (5.5%) were previously psychiatrically hospitalized.
A U.S. study followed 12144 implant patients (mean age 31) and 3614 other plastics patients (mean age 40) from 1970-2002. 29 (0.24%) implant patients suicided vs. 4 (0.1%) other plastics patients. Thus, the 29 suicides were 1.6 times more than expected (SMR=1.6).
Interestingly, the risk of suicide was increased only after ten years; 22/29 died after 10 years. And while the majority killed themselves before 35 (16/29, SMR=1.4), the biggest risk was for >40 year olds. (SMR=3.4)
Really interestingly, the authors found that for breast implants there was no excess risk for any kinds of accidents-- why should there be, they were accidents-- except car accidents. Hmmm. 10 MVA deaths (occurring 15 years post implant) vs. 0 for other plastic surgery. The authors speculate these may not have been accidents.
A Swedish study, prospective but with no comparator group, of 3521 women (mean age 31) found 15 (0.4%) suicides, SMR 2.9.
A Finnish study followed 2166 breast implant women from 1970-2000 (retrospectively) through 2001; there were 10 (0.4%) suicides, SMR 3. 6/10 happened in the first five years (in contrast to the U.S. study.) (There were also 14 accidental deaths, SMR 2.1. No explanation given for this.)
Canadian study: 24558 women with breast implants vs. 15893 women with other plastic surgery from 1974-1989, studied through 1997. Mean age 32. Once again, overall all-cause mortality was lower for breast implant women, except in suicide: 58 (0.24%, SMR 1.73) suicides vs. 33 (0.20%, SMR 1.55) for other plastic surgery. Like the U.S. study, women over 40 with implants carried the greatest risk of suicide (SMR 2.3), but there was no relationship to how long after surgery suicides occurred.
So in these studies, appx. 0.4% of breast implant patients killed themselves, representing a threefold higher risk than the general population. In some studies, the risk of suicide was increased to 1.5 times for any type of plastic surgery. At least in North America, getting implants over 40 is a risk for suicide. It goes without saying that the number of actual suicides was very small, and this could all be bunk.
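For readers unfamiliar with the statistic: a standardized mortality ratio (SMR) is just observed deaths divided by the deaths expected from age-matched general-population rates. A quick sketch using the studies' reported numbers (the expected counts here are back-calculated from the published SMRs, so they are approximate):

```python
def smr(observed: int, expected: float) -> float:
    """Standardized mortality ratio: observed deaths / expected deaths."""
    return observed / expected

# Expected suicide counts back-calculated from each study's reported SMR
# (approximate; the papers report the SMRs directly).
studies = {
    "Danish implants (n=2761)":    (14, 14 / 3.0),    # reported SMR ~3
    "U.S. implants (n=12144)":     (29, 29 / 1.6),    # reported SMR ~1.6
    "Canadian implants (n=24558)": (58, 58 / 1.73),   # reported SMR ~1.73
}

for name, (obs, exp) in studies.items():
    print(f"{name}: {obs} observed vs ~{exp:.1f} expected -> SMR {smr(obs, exp):.2f}")
```

The point of the SMR framing is visible in the Danish numbers: 14 suicides is a tiny absolute count, but roughly three times the handful expected in a cohort that size-- which is also why, as the post says, this could all be bunk.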
All studies excluded implants for breast cancer surgery.
You may be interested in knowing that suicide is the only serious risk that has been regularly associated with breast implants-- silicone included-- and supported by real evidence, so far. Everything else is either no greater risk, or less risk. For example, there is a higher risk of lung cancer, but it most likely is related to smoking, not the implant.
The obvious next step is to see if there is a causative link between implants and suicide (likely impossible) or the implant is a clue to something else (poor self image, depression, drinking, etc.)
Something else: the stereotypical breast implant recipient (e.g. 20 year old coed in Playboy) is not really the typical recipient. The average recipient is older (mean age 34); is more affluent; is married (75%) and has two kids; had kids at younger ages; has had abortions; and smokes. I mention this so that you have the right person in mind when you go looking for risks.
Other fun facts:
80% are cosmetic, 20% are breast cancer surgery reconstructions.
290,000+ breast implant surgeries done last year (compared to 130,000 in 1998). 25% are replacement surgeries for ruptures, pain, etc. Compare to 324k liposuction and 300k nose jobs.
10% of US women have implants. (This seems wrong.) 95% are white.
10% did it in California.
Since we're on the subject of implants and suicide, it seems to me an easy maneuver to fill breast implants with liquid explosives, puncture and mix. I am not sure why no one has tried this, actually-- or, more specifically, why no one at the TSA is looking for this as they stop to search my stupid tube of toothpaste. Not that there's any good way of checking, of course.
Some guys in Georgia do a massive study and discover that doctors use medications off label. They also determine that that's bad.
The real question is not why we use them off label, but why we persist in thinking that means anything.
First, the core problem with the paper, and the entire thesis of the validity of indications, is that the definition is recursive. A drug has an indication because it was found effective for a cluster of symptoms that we have defined as a disorder. This does not necessarily make the disorder valid, and it does not preclude the drug’s efficacy elsewhere.
In other words, it tells you what it is good for, but not what it isn’t good for.
So what is the value of an indication?
Can someone clarify the basis for the arbitrary distinction between “dementia related psychosis” and any other kind of psychosis? Is there new PET data that I missed that distinguishes the two?
Similarly, to say a drug “is” an antidepressant doesn’t mean it isn’t actually an antipsychotic. For example, what is it, exactly, about Prozac that makes it not an antipsychotic? The only legitimate answer is that when tested, Prozac didn’t work in psychosis—not that an antidepressant can’t be an antipsychotic. It is an artificial hierarchy that puts “antidepressant” below/weaker than “antipsychotic.” Try the reverse: can an antipsychotic be an antidepressant? Why is that easier to believe?
Thus, categorizing a medication based on an arbitrary selection of invented indications to pursue—and then restricting its use elsewhere—may not only be bad practice, it may be outright immoral.
I do not make the accusation lightly. Consider the problem of antipsychotics for children. It is an indisputable fact that some kids respond to antipsychotics. They are not indicated in kids. But don’t think for a minute there will be any new antipsychotics indicated for kids. Who, exactly, will pursue the two double blind, placebo controlled studies necessary to get the indication? No drug company would ever assume the massive risk of such a study-- let alone two-- in kids.
And which parents will permit their child in an experimental protocol of a "toxic" antipsychotic? Rich parents? No way. The burden of testing will undoubtedly be borne by the poor-- and with it will come the social and racial implications of testing on poor minorities. Pharma is loathed by the public and doctors alike, and the market for the drugs in kids is (let's face it) effectively already penetrated. There will not be any new pediatric indications for psych meds. Not in this climate. Think this hurts Pharma? It's the kids who suffer.
Lastly, the most common lament about this paper will likely be that insurance companies will use it to further restrict the practice of psychiatrists. Too bad. If psychiatrists cannot be bothered to learn how medications work and when they are appropriate, then unfortunately the State must intervene. It is, after all, their money, and it is not infinite. But restricting formularies based on "approved indications" (read: nothing) is not the solution. If the problem is economic (and it is) then you need an economic solution. And you're not going to like it.
"Off-Label Use of Antidepressant, Anticonvulsant, and Antipsychotic Medications Among Georgia Medicaid Enrollees in 2001." Hua Chen, Jaxk H. Reeves, Jack E. Fincham, William K. Kennedy, Jeffrey H. Dorfman, and Bradley C. Martin. 67(6), 2006.
Glick and friends did a small study finding-- big surprise-- stopping some of the medications when a schizophrenic is stable does not drive them into a horrible suicidal relapse. Some even got better.
Naturalistic study: 53 stable schizophrenics on antipsychotics were tapered off of antidepressants or mood stabilizers and followed for up to two years, using CGI as the measure (sigh.)
20/21 patients tapered off antidepressants were unchanged or (n=3) better; the one who did worse was an 18 yo WM on 300mg Wellbutrin.
9/12 tapered off mood stabilizers were unchanged; the three who did worse were all WM (actually, they were all white) on Lithium 600, Tegretol 1200, or Neurontin 1200.
So while that is encouraging, if only preliminary, what got me about the paper was this sentence:
"There are definitive data in general medicine showing that combinations are much more effective than monotherapy, supported by many randomized blinded studies with a good understanding of mechanism--"
Seriously? Does he have access to some other internet than I have? What are the references?
"-- in chronic pain, for example.16"
Oh, chronic pain. I see.
FYI: Glick also did a study on suicidality, finding that the study that showed Clozaril's anti-suicide property (InterSePT) had nothing to do with the concomitant medications (mood stabilizers, antidepressants or benzos).
(16) references a morphine +/- neurontin for pain paper.
Supplement to this earlier post: Ritalin Causes Cancer?
Follow-up study from a different group finds no clastogenic effect of Ritalin:
In summary, MPH was found to be non-genotoxic in all bacterial assays reported, in all in vitro mammalian assays conducted in compliance with current guidelines (5, present study) and in two in vivo bone-marrow micronucleus studies (5, present study).
It sounds like the El Zein study was a fluke (and thank God, too.)
But it doesn't resolve my main point: how is the average psychiatrist going to know about these findings? Is there a mechanism for new information? Is there somewhere, hell, even a blog or listserv, where psychiatrists can at least get the headlines of important articles? But that requires someone to write this all up, and I don't know anyone who has that amount of funding or time to spend on such an endeavor.
This is what a $150 subscription to the NEJM gets you:
From the abstract:
Conclusions: Augmentation of citalopram with either sustained-release bupropion or buspirone appears to be useful in actual clinical settings.
I can't be the only person who actually reads the articles and not just the titles, can I? There has to be at least one other person?
565 Celexa failures (i.e. did not achieve remission) from the previous STAR*D trial were then randomized to Celexa (avg dose 54mg) + Wellbutrin or Celexa + Buspar. 30 percent of the augmented patients (either Wellbutrin or Buspar) achieved remission.
From this it is concluded "These findings show that augmentation of SSRIs with either agent will result in symptom remission."
How the hell do you conclude that? Is it a mere coincidence that the remission rates of Celexa+Wellbutrin in this group were the same as Celexa alone in the other study (30%)-- and the same as almost every other monotherapy trial for every other antidepressant?
In other words, how can you be sure it was the combination of Celexa+Wellbutrin that got the patients better, and not the Wellbutrin alone? What would have happened if you had given these patients Wellbutrin but taken them off Celexa? They would have done half as well? Are you sure?
I'm not saying that it might not be true that two drugs are better than one, I'm saying that this study doesn't show that. If anything, this study actually supports switching as a strategy (i.e. fail Celexa, so switch to Wellbutrin)-- because two drugs are not proven here to be twice as good as one alone, but I can certainly prove they carry twice as many side effects and are twice as expensive.
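To put numbers on that suspicion, here is a back-of-the-envelope sketch in Python. The raw remission counts aren't quoted above, so they are reconstructed from the 30% figures and the stated sample sizes (565 augmented patients, ~2876 on Celexa monotherapy in the earlier trial); treat the counts as approximations.

```python
# Sanity check: does 30% remission on augmentation actually beat
# ~30% remission on monotherapy? Counts reconstructed from the percentages
# quoted above, not taken from the paper's raw data.
from math import sqrt

def prop_ci(successes, n, z=1.96):
    """Point estimate and 95% Wald confidence interval for a proportion."""
    p = successes / n
    se = sqrt(p * (1 - p) / n)
    return p, (p - z * se, p + z * se)

# ~30% remission among 565 augmented patients (level 2)
aug_p, aug_ci = prop_ci(round(0.30 * 565), 565)
# ~30% remission on Celexa monotherapy in level 1 (~2876 patients)
mono_p, mono_ci = prop_ci(round(0.30 * 2876), 2876)

print(f"augmentation: {aug_p:.0%} (95% CI {aug_ci[0]:.0%}-{aug_ci[1]:.0%})")
print(f"monotherapy:  {mono_p:.0%} (95% CI {mono_ci[0]:.0%}-{mono_ci[1]:.0%})")
# The intervals overlap almost completely: nothing here distinguishes
# "augmentation works" from "the second drug alone would have done the same."
```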
Here we have a massive expenditure of tax dollars that will undoubtedly lead to treatment guidelines that will be clinically misleading and economically wasteful. How much did the NIMH pay for this? And for CATIE? I'm not a Pharma apologist, but what was wrong with forcing Pharma to pay for their own studies which we get to pick apart? These government sponsored studies are no better. Gee-- the generic came out on top?
I don't even know what to make of this:
4041 patients show up and consent to be in a massive antidepressant trial, and almost 25% can't even score a HAM-D of 14? (7=complete cure.) Who are these people? What were they thinking?
And then of the ones who actually stay to participate (N=2876), their average HAM-D is 21? For two years?
And Celexa cures a third of these patients? Half of them in less than 6 weeks? After two years walking around HAM-D =21? Cures? Celexa? 40mg? Hello?
Remember, this is open label. These people, who presumably have been in psychiatric treatment for a long time (mean length of illness 15 years), know that they are taking 40mg of Celexa. Not a new experimental drug with a new mechanism of action. Celexa. 1/3rd get cured. After all this time.
BTW, the people who failed this Celexa study get moved into STAR*D level 2. What is the relevance of this? Well, in this study 63% were female, 75% were white, 40% were married, 87% were high school grads or greater, and 56% had jobs. It is the people outside this demographic who are the least likely to have gotten better.
As a final point on selegiline, it has long been thought that its efficacy in Parkinson's is due to the inhibition of the metabolism of dopamine. Which is true, but there may be more to it than that.
A summary of this fascinating article:
Apoptosis is different from necrosis in a fundamental way: it is signaled, rather than directly caused. In necrosis, the cell rapidly dies, the plasma membrane ruptures (with resultant irreversible ion shifts), but DNA stays intact. In apoptosis, the plasma membrane stays intact, but the cell shrinks, chromatin condenses and the DNA fragments.
Signaling is important: the genes p53, bad and bax induce apoptosis, while the Bcl family of genes promotes survival. It is now thought that apoptosis mediates substantia nigra neuronal death.
So anything that delays or stops apoptosis could be neuroprotective.
Selegiline seems to be such a drug. Since it inhibits the metabolism of dopamine, it will also prevent the formation of free radicals associated with this metabolism. But, through a mechanism totally independent of MAO-B inhibition, it protects dopamine neurons from MPTP and its metabolite MPP+. (This is a double effect: MAO-B inhibition prevents the metabolism of MPTP to the toxic MPP+; and then selegiline's other unexplained mechanism protects neurons from MPP+.) Its metabolite desmethylselegiline is actually a more potent neuroprotector; and P450 inhibitors which block the metabolism of selegiline to desmethylselegiline, also inhibit the overall neuroprotection.
The anti-apoptotic mechanism of selegiline (and the even more powerful irreversible MAO-B inhibitor rasagiline) is via glyceraldehyde-3-phosphate dehydrogenase (GAPDH). GAPDH is usually in a dimer with a stem-loop of RNA in the cytoplasm. In mitochondrial oxidation, NAD+ levels rise and knock GAPDH off, and it then floats to the nucleus. There, GAPDH inhibits the formation of anti-apoptotic molecules, and thus causes apoptosis. Propargylamines insert themselves into the RNA dimer and obstruct GAPDH from dislocating-- thus it cannot go to the nucleus and cause apoptosis.
Additionally, rasagiline upregulates Bcl-2 and Bcl-xl, among other anti-apoptotic molecules.
The study goes on to describe some clinical trials. Indeed, the entire May 2006 supplement in Neurology is about neuroprotection in Parkinson's-- definitely worth the read. We'll have more on this topic after some research.
The title says it all: Ictal eye closure is a reliable indicator for psychogenic nonepileptic seizures.
First, the bottom line:
50/52 patients with pseudoseizures closed their eyes during their "seizure," while 152/156 of actual epileptics opened their eyes during their seizures. That's a sensitivity of 96% and a specificity of 98%. That's gold.
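If you want to check the arithmetic yourself, here it is as a minimal Python sketch, using only the counts above (152/156 actually rounds to 97%; the quoted 98% presumably comes from a slightly different denominator in the paper):

```python
# Eye closure as a diagnostic test for pseudoseizures (PNES),
# using the counts quoted above. "Positive" = eyes closed during the event.
true_pos  = 50   # pseudoseizure patients with eyes closed
false_neg = 2    # pseudoseizure patients with eyes open (52 total)
true_neg  = 152  # epileptics with eyes open during the seizure
false_pos = 4    # epileptics with eyes closed (156 total)

sensitivity = true_pos / (true_pos + false_neg)   # 50/52
specificity = true_neg / (true_neg + false_pos)   # 152/156

print(f"sensitivity: {sensitivity:.0%}")  # 96%
print(f"specificity: {specificity:.0%}")  # 97%
```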
Now, the details:
The authors took 234 consecutive "seizure" patients, hooked them up to video EEGs and stopped their medications. There were 938 total ictal events in 221 patients. 52 (23%) had pseudoseizures, and 156 (70%) had epileptic seizures. There was a 3:1 female predominance among the pseudoseizure patients, and no sex difference among the epileptics.
In the epileptics, there was rhythmic eye blinking during tonic-clonic activity, and the eyes closed after the seizure was finished.
An interesting corollary to this is when pseudoseizures occur in an actual epileptic: quoting the authors, "the simple question of eye opening or closure can help differentiate between the two types of events. One previous study found that patients with both types of events tended to have their eyes closed during PNES and open during ES.(6)"
Of course, this isn't going to mean much to psychiatrists, apparently.
A questionnaire was put to neurologists (N=39) and psychiatrists (N=75) about the utility of video EEG in diagnosing pseudoseizures. 70% of the neurologists, but only 18% of the psychiatrists, thought that video-EEG was accurate "most of the time" in diagnosing pseudoseizures. 12% of the psychiatrists (no neurologists) said it is accurate "almost never." (3% of the psychiatrists gave no clear response. Why doesn't that surprise me?)
So here are some other differentiating symptoms:
In seizure patients, there is a crescendo-decrescendo quality to the spike-wave frequencies on EEG. In pseudoseizure patients, however, the frequency is the same from beginning to end, and it comes on suddenly as if a switch was flicked. The spike-wave on EEG is actually motion artifact, and typically runs around 4 Hz, while epileptics have frequencies that vary between 4-25 Hz.
In a study of 40 pseudoseizure vs. 40 matched normal controls, the pseudoseizure group had more left handers, reduced strength and speed in both dominant and non-dominant hands, and reduction in the dominant hand advantage in strength and speed (i.e. both hands performed equally badly-- the dominant hand wasn't a little better.) Interestingly and importantly, the authors did not think this was due to faking or psychological factors, but felt that it was due to actual neurologic impairment in bilateral pathways: 65% had had a closed head injury, 27% had had physical abuse, and 17% had had a history of substance abuse. 40% had an IQ less than 90!
A study in epileptics vs. pseudoseizure patients trying to determine how long after admission to a video EEG unit it takes for patients to have events (answer: 88% had one on day 1) also found that urinary incontinence, focal neurologic exams, and tongue biting were about the same in both groups. But more epileptic seizures lasted less than one minute, and more pseudoseizures lasted more than 5 minutes (very few pseudoseizures (13%) lasted less than one minute.)
Slightly different results were found in another study: 11/28 pseudoseizure patients had them on day 1, but 9/28 needed an average of 5 days. 19/28 had an induced pseudoseizure to IV saline challenge within 3-7 minutes. But still-- 3 days should be enough for most patients.
And alexithymia is of no value: it is found equally often in epileptics and pseudoseizure patients, though still more often than expected in the community. A larger, controlled trial found a similar inability of alexithymia to differentiate: it was very common in both epileptics (76%) and pseudoseizure patients (90%). Thus, it is likely that alexithymia is a coping strategy, and not an independent trait.
Addendum 11/5/06: I did find an interesting (Greek) study finding an excess of seizures on full moons (34% vs. about 21% for the other phases.) Importantly (and in contrast to suggestions by other studies) these were not pseudoseizures, because all patients were monitored. The authors speculate either electromagnetic/gravitational effects (hey, it could happen) or an interaction between the intrinsic seizure threshold and the environment (i.e. you can change your own threshold.)
Ironically, while selegiline can't be mixed with cheese or Prozac, it can be mixed with methamphetamine and cocaine. A small placebo controlled study found that concomitant administration of methamphetamine (15 or 30mg) with oral selegiline caused no EKG, lab, or vital sign changes. The clearance and half-life of methamphetamine were also unchanged. Similarly, 10mg PO can be safely mixed with up to 40mg cocaine, should you be into that. An earlier study found that 10mg/d could reduce the high of cocaine, reduced the activity of the amygdala (as defined by glucose utilization on PET scan), and did not cause any negative interactions.
If that's not good enough for you, a study using the selegiline patch 20mg/d in 12 cocaine addicts found that heart rate and blood pressure were lower on selegiline at baseline, and increased less after 40mg cocaine IV. It also caused a slightly smaller subjective high. In case this is not amazing to you, let me point out that as an MAO-B inhibitor, selegiline should increase dopamine levels-- and you should feel more high. But the opposite happened. (Why? Perhaps because selegiline already raises baseline dopamine, so cocaine has less room to increase it further-- and it feels less fun?)
It had no effect on cocaine pharmacokinetics or pharmacodynamics, and did not alter cocaine's effect on prolactin (suppression) or growth hormone (increase.)
A larger, 300 person double blind trial of patch versus placebo (done by the same authors) found no difference for the treatment of cocaine dependence-- but, importantly, there weren't any adverse effects of mixing the two, either.
While not recommended, it appears the patch is at least safe with your addict populations.
You've probably already read quite a bit about the selegiline (L-deprenyl) patch (right?), but these four (five) points may frame the information more usefully.
1a. All the oral MAOIs you are used to (phenelzine, moclobemide, tranylcypromine, etc.) are either nonselective (both MAO-A and MAO-B inhibitors) or are selective MAO-A inhibitors.
1b. MAO-A inhibition is needed for antidepressant effect.
2. MAO-A metabolizes serotonin, norepinephrine, dopamine, and tyramine.
3. MAO-A in the gut is what metabolizes tyramine. Inhibition of the gut MAO-A allows tyramine to enter the circulation unmetabolized-- thus releasing norepinephrine and causing hypertensive crises.
4. Oral selegiline (pill) is an MAO-B inhibitor at doses less than 10mg/d.
In other words, a) selegiline requires no dietary restriction below 10mg/d (because it doesn't affect MAO-A in the gut) and b) it doesn't work below 10mg/d (for depression; MAO-B metabolizes dopamine, so selegiline will still be good for Parkinson's at small doses.)
5. Above 10mg/d, selegiline is nonselective (thus MAO-A and B inhibition). Thus, a) it should work; b) it will require dietary restrictions.
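The dose logic in points 1-5 collapses into a simple rule; here is a sketch of it (the 10mg/d cutoff is the one quoted above, a rule of thumb rather than a hard pharmacologic boundary, and the function name is mine):

```python
def oral_selegiline_profile(dose_mg_per_day: float) -> dict:
    """Summarize oral selegiline's selectivity at a given daily dose,
    per the rules above: below 10mg/d it is a selective MAO-B inhibitor,
    at or above 10mg/d it becomes nonselective (MAO-A and MAO-B)."""
    if dose_mg_per_day < 10:
        return {
            "MAO_B_inhibition": True,
            "MAO_A_inhibition": False,
            "antidepressant_effect": False,  # MAO-A inhibition is required
            "dietary_restrictions": False,   # gut MAO-A still metabolizes tyramine
            "useful_in_parkinsons": True,    # MAO-B metabolizes dopamine
        }
    return {
        "MAO_B_inhibition": True,
        "MAO_A_inhibition": True,
        "antidepressant_effect": True,       # at least in principle
        "dietary_restrictions": True,        # tyramine now enters the circulation
        "useful_in_parkinsons": True,
    }

print(oral_selegiline_profile(5)["dietary_restrictions"])   # False
print(oral_selegiline_profile(20)["dietary_restrictions"])  # True
```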
One interesting point: selegiline is rapidly metabolized (first pass) to desmethylselegiline, l-amphetamine, and l-methamphetamine. (1)
The point of the patch is that it bypasses first pass metabolism (you don't eat it), so you get much higher concentrations of drug into the CNS and fewer metabolites. Also, much less reaches the intestinal MAOs, so you get both MAO-A and B inhibition in the brain, but less MAO-A inhibition in the intestine. So even if you use doses greater than 10mg/d, you (probably) don't need dietary restrictions. (NB: even though I can't find any studies clearly establishing the risk (most find it safe up to 20mg), the PI still says to avoid tyramine foods above 9mg/d.)
Part 2: Efficacy
Above, I made the outrageous statement, "it doesn't work below 10mg/d." What's really outrageous is that I couldn't find any evidence that it worked above 10mg/d, either.
Here's a typical example: a 2003 study of 289 patients, double blinded, placebo controlled, of selegiline patch 20mg/d (keep in mind, the starting dose is 6mg/d) vs. placebo patch. Though the paper finds "statistical superiority" of the patch over placebo, it took 8 weeks to get a 2-3 point difference on the MADRS or HAMD-28. (For context: the HAMD-28 has 28 questions with ratings from 0-4. So three points difference could be three points on one question, or one point on three questions...) It never beat placebo on the HAMD-17. (To the author's credit, he does not hide this and is upfront that these were "modest" differences.)
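To put that 2-3 points in perspective, here is the arithmetic against the scale's theoretical range (taking the 28-items-scored-0-4 description above at face value; real HAM-D items vary in their maximum scores):

```python
# How big is a 2-3 point HAMD-28 difference, relative to the scale?
items, max_per_item = 28, 4
max_score = items * max_per_item          # 112 points, theoretical maximum
best_case_separation = 3                  # the 8-week drug-placebo gap above
fraction = best_case_separation / max_score

print(f"{fraction:.1%} of the scale's full range")  # 2.7%
```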
Contrast that with the first clinical trial of the selegiline patch (done, astonishingly, by the same author): superior efficacy on all three scales. But, of course, even at 20mg/d, it's really not that superior:
Maybe 2-4 points, max? I grant more people responded to the patch (as defined by a 50% reduction in HAM score)-- but it was 15% more people, and, well, come on...
Just to make the point, a 67 person, multicenter, double blind, placebo controlled study tested oral selegiline's efficacy in schizophrenics, and found improvement vs. placebo in "negative symptoms," as defined by the Scale for the Assessment of Negative Symptoms (SANS). Troublingly, "improvement" means one point difference:
And not much happened for depression (HAM-D) either.
Someone, somewhere is going to accuse me of only showing weak studies and omitting all the studies that showed it worked well. Okay. Here is the last known study:
The only other patch study, a 321 person, year-long placebo controlled study, found that while twice as many people dropped out for side effects (13.2% vs. 6.7%), nearly twice as many on placebo relapsed by 6 months (16.8% selegiline relapse vs. 29.4% placebo). Interestingly, from 6 months to one year the relapse rates for drug and placebo were identical-- in other words, all relapses occurred in the first 6 months, none in the second 6 months.
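For the glass-half-full crowd, those 6-month relapse rates convert into a number needed to treat; a sketch using only the percentages quoted above:

```python
# Number needed to treat (NNT) at 6 months, from the relapse rates above.
placebo_relapse = 0.294
selegiline_relapse = 0.168

arr = placebo_relapse - selegiline_relapse   # absolute risk reduction
nnt = 1 / arr                                # patients treated per relapse prevented

print(f"ARR: {arr:.1%}, NNT: {nnt:.0f}")  # ARR: 12.6%, NNT: 8
```

So roughly eight patients on the patch for a year to prevent one relapse, all of the benefit accruing in the first six months.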