November 6, 2008

The CIA Has The Same Problem Medicine Does

After some dozen years' immersion in intelligence, I still find myself reacting uncomfortably to its rather cavalier disregard for the footnote.


Both the CIA and medicine have little patience for regular reexaminations of primary sources.

("Footnote" to the CIA does not mean that referencing a journal or book, but refers rather to the actual source (guy) of the information, when, where, under what conditions, etc.  In this way, footnote is more analogous to individual data points.)

John Alexander (not his real name), writing in the CIA journal Studies In Intelligence, "A Modest Proposal for a Revolution in Intelligence Doctrine":


For example, and I find this quite ironic, the higher the level of the intelligence product, the less complete is its visible documentation. In other words, the more serious its import and the closer it is to the influential official who will act upon it, the slighter is its overt back-up.

At the lowest level, of course, is the raw intelligence report. This report is generally extraordinarily well evaluated and supported. No scholar could really, within the normal limits of national security, ask much more.... The user of this kind of report can easily and effectively apply the canons of evidence in evaluating and testing the information.

As in medicine-- at the lowest level we have the data-driven studies, and that data is right there, available to all.  The Methods and Procedures are carefully described.

But as we move up the ladder of intelligence reports the documentation gets sparser. The NIS (National Intelligence Survey), to use a well-known example, is in effect a scholarly monograph, digesting a great multitude of raw reports. Its total documentation usually consists of a single, very brief paragraph commenting on the general adequacy of the source material.

And then we have the review article.  While the studies reviewed are referenced, the data in those studies is not rehashed.  With statements like "in this well-designed trial..." we are left hoping the author actually read the article he is referencing, critically examined its data, and didn't just cut and paste from the abstract.

Next up the ladder is our analogue, "Expert Guidelines":

At the more exalted level of the NIE (National Intelligence Estimate), documentation even in the generalized form of comments on sources has usually disappeared altogether. One is forced to rely on the shadings given to "possibly," "probably," and "likely" and on other verbal devices for clues as to the quantity and quality of the basic source data. These examples from the NIS and NIE are paralleled in a great many other publications of similar refinement. One may admire the exquisite nuances and marvel at what a burden of knowledge and implicit validation the compressed language of a finished "appreciation" can be forced to carry, but one cannot help being concerned about the conclusions. Upon what foundations do those clever statements rest?
One can only speculate.

II.

It's going to be obvious to some that rehashing the primary data points, over and over, all the way up to the "exalted level" of treatment guidelines is going to be impractical.  What we need to do is trust that the intermediary authors and experts are doing it.  No one expects Bush to look at the satellite images himself, but perhaps Tenet should.  Etc.  Well, there's a problem with this as well:

Another situation that troubles me is the vast array of editors and reviewers... to which an intelligence product is subjected before it is finally approved for publication.... I recognize that many of these reviewers are highly talented, experienced individuals.... But what basis do they have for their exalted "substantive" review?

Translation:

these reviewers have not generally been systematically exposed to the current take of raw data. Their knowledge of current intelligence events is based on hurried reading of generalized intelligence reports or on sporadic attendance at selected briefings. They are not aware in any particular instance--nor should they be--in any real detail of the material actually available on a particular subject.

Medicine's experts rarely have much recent experience "on the ground."  They don't treat raw patients (as opposed to clinical trial patients); their knowledge of other people's studies is no more complete or penetrating than anyone else's-- but because they are experts in their field, they are able to put their imprint on other people's work.  The three idiots who review a paper on, say, Zyprexa-induced diabetes are experts in psychosis, but none of them has more than intern-level training in diabetes or in structural pharmacology.  This is why there are so many "experts" talking about diabetes, but none have told us why it occurs.

And so, once a paper fits the bias of the peer reviewers, what actually happens in peer review?

As a result much high-level review... has consisted of the discovery of occasional typographical errors, small inconsistencies in numbers cited in different paragraphs...

The author notes that even this flawed system has worked surprisingly well; and there are fields of medicine about which the same can be said (surgery).  But the reason it works there is that there is a real and visible consequence.  Were you wrong?  People die.

Psychiatry isn't like that; you can be wrong for decades and no one notices.  People die, certainly, but no one sees the link back to the practice.  Couple that with the academic nepotism-- or, at minimum, groupthink-- that is the formal and explicit basis of all psychiatric practice, and it becomes evident that something has to change.

But nothing will.  This CIA article was written in 1964.

Related articles:

Ten Things Wrong With Medical Journals
 
What's Wrong With Research In Psychiatry?

Are Drug Companies Hiding Negative Studies?
