April 3, 2006

Nature Weighs In on What Is True, and Turns Out to Be Wrong

There's been something of a controversy raging over the best place to get accurate information.

Specifically, there's a free, user-written encyclopedia called Wikipedia at http://www.wikipedia.org that competes with the Encyclopedia Britannica. The idea is that anyone who uses Wikipedia can edit any article. So if you happen to be reading an article that has an error in it (for example, if it says the Constitution was ratified in 1798), you can correct it with a few clicks (e.g. you change it to 1789). Aside from controversial topics (where articles are edited constantly to favor one opinion or the other), the "hard facts" articles on science, culture, or history are fairly decent. Or so they seem at first glance.

The controversy is this: Britannica's editor in chief went on record (in a newspaper article I can't find) stating that Britannica is a better, more reliable source of "knowledge" because it's a closed controlled editing environment, where articles are researched, edited, and reviewed internally by academics who are experts in their respective fields.

Wikipedia responded saying it doesn't need all that editorial oversight because any error in an article is corrected relatively quickly by an expert in the field.

The essence of the argument is "top-down" (Britannica) vs. grass-roots/bottom-up (Wikipedia), or, to put it more succinctly: does the existence of a gatekeeper for knowledge improve the quality, accuracy, and veracity of knowledge?

The debate matters for two reasons: (1) at some point people have to agree on the basic facts of whatever they are talking about, and (2) there needs to be a place where you can find the core true facts about any subject.

So anyway, the journal Nature decided to compare the two sources of knowledge:

http://www.nature.com/news/2005/051212/full/438900a.html

Now you may ask what the hell business is it of Nature's (a quasi-medical journal) to do this (review the accuracy of encyclopedias), but that's my point and I'll get to that in a second.

Anyway, surprise, surprise, Nature says that in the case of science articles, Wikipedia is better. This was not unexpected - Wikipedia claimed all along that it amounted to enabling peer review of its articles by readers, and Nature, of course, is all about peer review.

Britannica responded, finding errors in Nature's methodology (warning: pdf ahead, but it's worth reading if you think for a second Nature should be trusted to do anything):

http://corporate.britannica.com/britannica_nature_response.pdf

concluding that the study was bogus, and that Britannica had far fewer errors and omissions than Nature claimed.

 

What interests me here is not the accuracy of Wikipedia vs. Britannica, but why Nature feels it is in any position to examine this.

Here is Nature's response to Britannica's criticisms:


http://www.nature.com/nature/journal/v440/n7084/full/440582b.html

Here's the line to focus on:

"Britannica complains that we did not check the errors that our reviewers identified...but there is a more important point to make. Our reviewers may have made some mistakes — we have been open about our methodology and never claimed otherwise — but the entries they reviewed were blinded: they did not know which entry came from Wikipedia and which from Britannica."

For the record, Nature says this is how the test was conducted:

"Each pair of entries was sent to a relevant expert for peer review. The reviewers, who were not told which article came from which encyclopaedia, were asked to look for three types of inaccuracy: factual errors, critical omissions and misleading statements. 42 useable reviews were returned."
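
To make concrete what that blinding does and does not cover, here is a rough sketch of the tally in Python. Everything in it is invented for illustration (the topics, the error counts, the data layout); none of it is Nature's actual data:

import random

random.seed(0)

# One hypothetical expert per topic; each sees a blinded pair labeled "A" and "B".
topics = ["quark", "lipid", "haemophilia", "Mendeleev"]   # made-up examples

reviews = []
for topic in topics:
    pair = ["wikipedia", "britannica"]
    random.shuffle(pair)   # the blinding: the reviewer sees labels, not sources
    reviews.append({
        "topic": topic,
        # "errors" = factual errors + critical omissions + misleading statements
        "A": {"source": pair[0], "errors": random.randint(0, 5)},
        "B": {"source": pair[1], "errors": random.randint(0, 5)},
    })

# Unblind and average the error counts per source.
totals = {"wikipedia": 0, "britannica": 0}
for review in reviews:
    for label in ("A", "B"):
        totals[review[label]["source"]] += review[label]["errors"]

for source, total in totals.items():
    print(source, round(total / len(reviews), 2), "errors per entry")

The only thing the shuffle buys you is that the reviewer can't consciously favor one brand over the other; everything upstream of that coin flip is untouched.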

And this, my scientician friends, is why medicine isn't a science. Nature is saying that its methodology is sound because the entries they reviewed were blinded - BUT WHO ARE THE PEER REVIEWING EXPERTS, WHO SELECTED THE ENTRIES, AND ACCORDING TO WHAT CRITERIA?

 

You cannot excerpt an article describing something and then test the excerpt for omissions. Furthermore, the excerpting is not blinded, and the person excerpting things may have a different opinion of what can be safely left out than the person doing the review.

Nature's mistake is assuming that the expert is always right: if the expert disagrees with Britannica, then Britannica is wrong. You should be able to test the accuracy of an entire encyclopedia article *by giving it to multiple experts*, not the other way around (multiple articles to one expert). The hypothesis is "do experts think the article is correct?", and you test it by finding the percentage of experts who think it is or isn't.
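
In other words, the test I'm describing is just estimating a proportion. A minimal sketch, with completely made-up verdicts:

import math

# Fifteen hypothetical experts all read the SAME encyclopedia entry and
# say whether they think it is accurate. (Invented data.)
verdicts = [True, True, False, True, True, True, False, True,
            True, True, True, False, True, True, True]

n = len(verdicts)
p = sum(verdicts) / n                     # fraction of experts calling it accurate
se = math.sqrt(p * (1 - p) / n)           # standard error of a proportion
margin = 1.96 * se                        # rough 95% margin of error

print(f"{sum(verdicts)}/{n} experts call it accurate "
      f"(about {p:.0%}, give or take {margin:.0%})")

With one expert per entry (Nature's design) there is no such number: you can't tell the expert's mistakes apart from the encyclopedia's, which is exactly what Nature conceded when it admitted "our reviewers may have made some mistakes."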

What is truly ironic is that while Nature likes to hold itself out as an open source for medical knowledge (and thus more like Wikipedia), it is in fact a gatekeeper of knowledge like Britannica. When Nature publishes an article, the belief of the scientific community is that the article is correct *because it's in Nature*. But Nature is the journal of statistical regression sciences - medicine, global warming, etc., i.e. disciplines where there is no right answer or it's impossible to know the right answer because you are observing only a small percentage of all the variables being affected.  It tests associations, not causality. 
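
If "associations, not causality" sounds abstract, here is a toy simulation: a hidden variable drives both X and Y, X never causes Y, and yet the two end up strongly correlated. All numbers are invented; only the structure matters.

import math
import random

random.seed(1)

xs, ys = [], []
for _ in range(10000):
    z = random.gauss(0, 1)               # the variable nobody measured
    x = z + random.gauss(0, 0.5)         # X is driven by Z
    y = 2 * z + random.gauss(0, 0.5)     # Y is driven by Z, not by X
    xs.append(x)
    ys.append(y)

def correlation(a, b):
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    cov = sum((i - mean_a) * (j - mean_b) for i, j in zip(a, b)) / n
    var_a = sum((i - mean_a) ** 2 for i in a) / n
    var_b = sum((j - mean_b) ** 2 for j in b) / n
    return cov / math.sqrt(var_a * var_b)

print("correlation(X, Y) =", round(correlation(xs, ys), 2))   # strong, despite zero causation

Run a regression of Y on X over that data and you get a large, "significant" coefficient that measures nothing but the variable you left out.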

Keep this in mind when a journal like Nature also makes policy proclamations ("global warming needs to be stopped") or creates artificial hierarchies by its coverage (substantially more articles on HIV than malaria, so HIV becomes more "important" than malaria, etc.).