November 6, 2008

The CIA Has The Same Problem Medicine Does

After some dozen years' immersion in intelligence, I still find myself reacting uncomfortably to its rather cavalier disregard for the footnote.


Both the CIA and medicine have little patience for regular reexaminations of primary sources.

("Footnote" to the CIA does not mean that referencing a journal or book, but refers rather to the actual source (guy) of the information, when, where, under what conditions, etc.  In this way, footnote is more analogous to individual data points.)

John Alexander (not his real name), writing in the CIA journal Studies In Intelligence, "A Modest Proposal for a Revolution in Intelligence Doctrine":


For example, and I find this quite ironic, the higher the level of the intelligence product, the less complete is its visible documentation. In other words, the more serious its import and the closer it is to the influential official who will act upon it, the slighter is its overt back-up.

At the lowest level, of course, is the raw intelligence report. This report is generally extraordinarily well evaluated and supported. No scholar could really, within the normal limits of national security, ask much more.... The user of this kind of report can easily and effectively apply the canons of evidence in evaluating and testing the information.

As in medicine-- at the lowest level we have the data-driven studies, and that data is right there, available to all.  The Methods and Procedures are carefully described.

But as we move up the ladder of intelligence reports, the documentation gets sparser. The NIS (National Intelligence Survey), to use a well-known example, is in effect a scholarly monograph, digesting a great multitude of raw reports. Its total documentation usually consists of a single, very brief paragraph commenting on the general adequacy of the source material.

And then we have a review article.  While the studies reviewed are referenced, the data in those studies is not rehashed.  With statements like "in this well-designed trial..." we are left hoping the author actually read the article he is referencing, critically examined its data, and didn't just cut and paste from the abstract.

Next up the ladder is our analogue, the "Expert Guidelines":

At the more exalted level of the NIE (National Intelligence Estimate), documentation even in the generalized form of comments on sources has usually disappeared altogether. One is forced to rely on the shadings given to "possibly," "probably," and "likely" and on other verbal devices for clues as to the quantity and quality of the basic source data. These examples from the NIS and NIE are paralleled in a great many other publications of similar refinement. One may admire the exquisite nuances and marvel at what a burden of knowledge and implicit validation the compressed language of a finished "appreciation" can be forced to carry, but one cannot help being concerned about the conclusions. Upon what foundations do those clever statements rest?
One can only speculate.

II.

It's going to be obvious to some that rehashing the primary data points, over and over, all the way up to the "exalted level" of treatment guidelines, is impractical.  What we need to do is trust that the intermediary authors and experts are doing it.  No one expects Bush to look at the sat images himself; but perhaps Tenet should, etc.  Well, there's a problem with this as well:

Another situation that troubles me is the vast array of editors and reviewers ...to which an intelligence product is subjected before it is finally approved for publication.... I recognize that many of these reviewers are highly talented, experienced individuals....But what basis do they have for their exalted "substantive" review?

Translation:

these reviewers have not generally been systematically exposed to the current take of raw data. Their knowledge of current intelligence events is based on hurried reading of generalized intelligence reports or on sporadic attendance at selected briefings. They are not aware in any particular instance--nor should they be--in any real detail of the material actually available on a particular subject.

Medicine's experts rarely have much recent experience "on the ground."  They don't treat raw patients (as opposed to clinical trial patients); their knowledge of other people's studies is no more complete or penetrating than anyone else's-- but because they are experts in their field, they are able to put their imprint on other people's work.  The three idiots who review a paper on, say, Zyprexa-induced diabetes are experts in psychosis, but none of them have more than intern-level training in diabetes or in structural pharmacology.  This is why there are so many "experts" talking about diabetes, but none have told us why it occurs.

And so, once a paper fits the bias of the peer reviewers, what actually happens in peer review?

As a result much high-level review... has consisted of the discovery of occasional typographical errors, small inconsistencies in numbers cited in different paragraphs...

The author notes that even this flawed system has worked surprisingly well; and there are fields of medicine about which the same can be said (surgery); but it works there because there is a real and visible consequence.  Were you wrong?  People die.

Psychiatry isn't like that; you can be wrong for decades and no one notices.  People die, certainly, but no one sees the link back to the practice.  Couple that with the academic nepotism-- or, at minimum, the groupthink-- that is the formal and explicit basis of all psychiatric practice, and it becomes evident that something has to change.

But nothing will.  This CIA article was written in 1964.

Related articles:

Ten Things Wrong With Medical Journals
 
What's Wrong With Research In Psychiatry?

Are Drug Companies Hiding Negative Studies?

-----

and still searching for Diggs, Reddits, and donations-- the revenue generators of this blog

Comments

November 7, 2008 12:57 AM | Posted by xon:

In the end, it's really just dogmatism, with a light dusting of credentialism thrown in for garnish.

One thing I find interesting is that analysts tend to understate their influence. They do this because Intelligence is what people who 'know' say to people who 'do'. To openly claim that they actually know better than the executive would be to challenge the executive's legitimacy. So, they content themselves to proffer the accumulated wisdom in a way that, above all else, can never be actually proven incorrect, using very elaborate and subtle cues to show their deference to the doer (decisionater???).

Conversely, medicos tend to overstate their influence. I'm guessing they do this because they make more money that way. I actually had a psychiatrist tell me, after prescribing antidepressants for five years without so much as inquiring about therapy, "We don't really do that. We just manage meds."

If I could make $1000 an hour just telling people to take reading lessons from 'someone else', I guess I wouldn't be too invested in their actually learning how to read either...

November 7, 2008 5:53 AM | Posted by David:

but it works there because there is a real and visible consequence. Were you wrong? People die.

Psychiatry isn't like that; you can be wrong for decades and no one notices. People die, certainly, but no one sees the link back to the practice.

Then there's the difficulty of what metric you're going to use to define a "successful" outcome and the inevitable "dumbing down" of the very concept of success.

Consider the plight of our current educational system and the need for standardized testing. Supposedly this addresses whether or not grade schools are effective. In real life, instead of fostering learning, schools adapt their practices so that students pass the tests and the schools continue receiving federal and state funding.

November 7, 2008 12:45 PM | Posted by spriteless:

Many, many big organizations have this problem. I know the military has incredibly efficient information channels for old-style warfare, but actively edits out the stuff that would be useful for reconstruction.

November 8, 2008 12:04 PM | Posted by MedsVsTherapy:

"analysts tend to understate their influence. They do this because Intelligence is what people who 'know' say to people who 'do'. To openly claim that they actually know better than the executive would be to challenge the executive's legitimacy. So, they content themselves to proffer the accumulated wisdom in a way that, above all else, can never be actually proven incorrect, using very elaborate and subtle cues to show their deference to the doer (decisionater? ? ?)."

This is an awesome observation.
This phenomenon needs a name, similar to the "Peter Principle" for the way personnel rise to their level of incompetence.

I myself have done many investigations into various health care topics to get to the heart of the evidence. Probably nine times out of ten, the "paper trail" is insufficient (i.e., the string of references ends at a citation to some conference presentation poster), or, when you go to contact the "corresponding author," they cannot back up what they put into print. I was seeking info on a measure-- I think it was for side effects from antihypertensives, or for a measure of client satisfaction with HTN meds-- and the "corresponding author" was a front for a pharmaceutical company; the "corresponding author" referred me to the pharmaceutical research group. Their various email addresses were all non-functional.

I went to track down the empirical evidence regarding the various remedies for morning sickness: ginger, Phenergan, acupressure, etc. I would need to make up a new range of "quality," since the descriptors used by the U.S. Preventive Services Task Force or Cochrane Reviews have a floor effect: no descriptors low enough to describe "sketchy," "wispy," or "a scant bit better than rumor."

Thank God I get a chance to teach on this topic. This fall, I am frustrating the heck out of my students with their writing assignment: find evidence for some health care claim. Any claim.

November 8, 2008 2:39 PM | Posted by Christo78:

Would you expect the problems with peer review, publication bias, marketing that masquerades as evidence-based medicine, and the dominance of so-called "experts"-- who determine everything from the direction of future research to the (mis)interpretation and dissemination of existing literature in accordance with their own personal agendas-- to cause a riot among doctors? You think they are too busy making money? How about medical students?

I am kind of heartbroken. A few days ago, I delivered the last lecture of a psychiatry class. The first class I have ever taught... At the end I invited my students to give me some feedback. Their sole criticism can be summarized as follows: “It would have been more helpful if you had stuck to the syllabus and told us what was important and what was not. It felt too disorganized sometimes!”

Was it too much for a group of medical students to consider the possibility that a significant percentage of what is being taught to them as “important” is simply trash? That their syllabus is a collection of (mostly) meaningless, biased, and irrelevant articles that serve no purpose beyond justifying their authors’ academic posts? That basic outcomes in psychiatry probably have not changed since the late '50s? Did they feel threatened when I argued, with clinical examples, that psychiatric disease constructs are being abused to address societal problems? These are medical students in their mid to late twenties who will begin to practice medicine in about two years! What can be more important than thinking about these things?

Maybe I was not a good enough teacher. I saw apathy in their eyes and failed to do anything about it. That is all.

November 8, 2008 7:26 PM | Posted, in reply to Christo78's comment, by xon:

No, you're not a bad teacher. You're an excellent one. But they don't need a teacher (through no fault of their own, except for intellectual apathy, which is hardly an aberration these days). Neither you nor their other teachers are going to pay them. Experts are going to pay them. Politicians are going to pay them. The ignorant are going to pay them.

The skills they need for their 'professional success' are not reasoning and insight. What they need for their professional success are the 'proper' answers to regurgitate to the gatekeepers of the profession, and the proper etiquette to demonstrate subordination to the current dominants in the market for their credentials.

Knowledge is becoming less and less important, and less valued, today. Dogma is becoming more and more valuable. Dark Ages Here We Come.
