January 17, 2011

This Time It's ESP

I knew this was going to happen


Let me clarify one point about the MMR/Wakefield controversy.  The fact that Wakefield faked his data does not prove there's no link.  Right?  I don't think there's a link, of course, but what do I know?  I'm a pirate.

There's a controversy about a paper published in the Journal of Personality and Social Psychology, a highly reputable academic journal.  The paper is about ESP, which is the controversy.


Either (NYT):

Journal's Paper On ESP Expected to Prompt Outrage

One of psychology's most respected journals has agreed to publish a paper presenting what its author describes as strong evidence for extrasensory perception... The decision may delight believers in so-called paranormal events, but it is already mortifying scientists.

Or (NPR):

Could It Be?  Spooky Experiments That See The Future

One of the most respected, senior and widely published professors of psychology, Daryl Bem of Cornell, has just published an article that suggests that people -- ordinary people -- can be altered by experiences they haven't had yet. Time, he suggests, is leaking. The Future has slipped, unannounced, into the Present. And he thinks he can prove it.

All depends on whether you think "scientists don't know everything, man!" or "scientists are fraudsters, man!"

II.



The experiments are of the type: two groups take a test; one group is then shown the answers, the other group isn't.  The group that was shown the answers after the test did better on the test.  Weird, right?
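Here's a minimal sketch (mine, not Bem's analysis code; the group sizes and recall rate are invented for illustration) of the boring hypothesis-- post-test practice can't reach back in time-- so you can see what any "retroactive" effect has to beat:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # Boring (null) hypothesis: practicing the answers after the test can't
    # help, so both groups recall words at the same true rate.
    # All numbers below are illustrative, not the paper's.
    n_per_group, n_words, true_recall = 50, 48, 0.40

    control  = rng.binomial(n_words, true_recall, n_per_group)   # words recalled
    practice = rng.binomial(n_words, true_recall, n_per_group)   # "practiced afterward" group

    result = stats.ttest_ind(practice, control)
    print(f"difference: {practice.mean() - control.mean():+.2f} words, p = {result.pvalue:.3f}")
    # Under this null the p-value is (roughly) uniform: most runs show nothing,
    # but about 1 in 20 slips under .05 by luck alone.  Any "retroactive" effect
    # has to be judged against that baseline, which is a statistics problem
    # before it is a psychology problem.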

The paper describes nine unusual lab experiments... testing the ability of college students to accurately sense random events, like whether a computer program will flash a photograph on the left or right side of its screen. The studies include more than 1,000 subjects. Some scientists say the report deserves to be published, in the name of open inquiry; others insist that its acceptance only accentuates fundamental flaws in the evaluation and peer review of research in the social sciences.

"It's craziness, pure craziness. I can't believe a major journal is allowing this work in," Ray Hyman, an emeritus professor of psychology at the University Oregon and longtime critic of ESP research, said. "I think it's just an embarrassment for the entire field."

Hyman is right but for the wrong reasons, for self-serving reasons, which makes him wrong.   And the NYT assertion that this "accentuates fundamental flaws in the peer review of research in the social sciences" is also wrong, wrong, wrong, wrong.


There's a subtlety to the experiments that is indeed explicit in the articles but is easily overlooked, so I'll quote from the study:

From the participants' point of view, this procedure appears to test for clairvoyance. That is, they were told that a picture was hidden behind one of the curtains and their challenge was to guess correctly which curtain concealed the picture. In fact, however, neither the picture itself nor its left/right position was determined until after the participant recorded his or her guess, making the procedure a test of detecting a future event, that is, a test of precognition. 
This is the part that's important.  If it was a study of clairvoyance, well, could there be a possible physical explanation?  Perhaps.  But time travel?

Which is why anyone who says this study  "doesn't belong in a scientific journal" is wrong.  It doesn't belong in a psychology journal: this is an experiment about the laws of physics, not the laws of psychology. 

And so to say that  it is a failure of peer review-- like they did with Wakefield--  also misses the point.   Bem's peers are in absolutely no position to review this.  This study is better reviewed by physicists.  Bem himself makes an explicit case for quantum entanglement!  So notwithstanding my own rants about peer review,

"Four reviewers made comments on the manuscript," [said the journal's editor] "and these are very trusted people."

Trusted though they may be, they are not experts in the field being studied. 

All four decided that the paper met the journal's editorial standards, [the editor] added, even though "there was no mechanism by which we could understand the results."

Exactly.  So you should have sent it to the physicists.  You know, the ones who work a building over in the same university that you do.  That was the whole reason for universities, right? 

No, I'm a dummy.  The purpose of universities is to suck up Stafford loan money.  And the purpose of journals is to mark territory, more money in that, like a corporation that spins off a subsidiary.  NO CROSS SCIENTIFIC DISCUSSION ALLOWED IN SCIENCE, EVER, EXCEPT IN SCIENCE, NATURE, AND THE POPULAR PRESS.
 

III.

So I'll be explicit: peer review may have problems, but the entire way we evaluate science is territorial and stuck in the 19th century, which, ironically, was a time when scientists were much less territorial and practiced multiple disciplines.

With a data feed to select articles on "psychiatry," what do I need a psychiatry journal for?  If you wanted to be brain scientists, why do you have separate journals from other brain scientists?

How awesome would it be to have an astrophysics grad student or a PhD economist or a dancer or any one of the mofos from metafilter come look at a psychiatric clinical trial and discuss it?  You wouldn't have to pay them, they would think it was fun-- what, you think I'm blogging because of the millions of dollars in donations I get from Denmark and now the Pacific Northwest (6x in a month-- did you guys find a workaround for Cybernanny?)

And no, not just the paper data; why not video the whole process and upload it? I have a phone that shoots 720p HD; if that's good enough for the optical demands of amateur porn, why isn't it good enough for science?

If researchers published their paper along with all of the primary source data on a web page, and let the public wikipedia it up, we might discover that a study was crap-- but we might also learn something about how studies become crap: the biases, the hidden pitfalls, etc.  (No, "available upon request" does not cut it.)

Instead, we have a near idiotic controversy occurring in self-imposed darkness.   "It's a big butt!"   "No, it's a big leg!"   "No, it's a weird snaky-thing!"   "Well whatever it is, don't turn on the light, let's just keep guessing-- this way we can all get publications out of it."






Comments


January 17, 2011 11:51 AM | Posted by gwern: | Reply

It's actually statisticians to whom this sort of statistical oddity should be sent. And if you read the NYT articles thoroughly, you'll see they promptly tore apart the ESP study.


January 17, 2011 12:07 PM | Posted by Gary: | Reply

TLP: "If you wanted to be brain scientists, why do you have separate journals form other brain scientists?"

If you wanted to be brain scientists, why not become neurologists? A mind scientist? Psychology. We have mind and brain, and nothing else. Psychiatry is a half-assed combination of psychology and neurology that by definition can provide no original research. It's an empty "specialty". There can be no "psychiatric advancements" in research.


January 17, 2011 1:39 PM | Posted by Laura: | Reply

From the NYT article: "Perhaps more important, none were topflight statisticians. “The problem was that this paper was treated like any other,” said an editor at the journal, Laura King, a psychologist at the University of Missouri. “And it wasn’t.”"

So... as long as journal articles uphold our belief systems, they don't deserve as much scrutiny as this paper?


January 17, 2011 1:40 PM | Posted by BHE: | Reply

I'm wondering why they told them there was a picture behind one of the curtains in the first place, instead of just asking them to predict which curtain it *would* be behind. Are they saying you can only predict the future if you don't know you're predicting the future?

From that standpoint you could ask the question--who was being studied? The people doing the guessing, or the people placing the pictures behind the curtain? Maybe this isn't so much a test of predicting the future as it is a test of the researchers' clairvoyance in guessing what the subjects marked on their sheets.

And even then, after seeing a few of these sheets, don't you think the researchers would have an idea (even subconsciously) about the kinds of patterns of guesses their subjects make? And might therefore place the pictures behind the curtains in a way that was less than random?

Of course, without reading the methodology I have no idea. Which goes to your point about putting this all up on the internet for us laypersons to dissect.


January 17, 2011 1:45 PM | Posted, in reply to Laura's comment, by gwern: | Reply

Extraordinary claims require extraordinary evidence. If a man tells me he left his umbrella at home, I will take him at his word. If he tells me he is actually a fish and we are all fishes, I will disbelieve him and demand to see his fins.
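The umbrella/fish asymmetry is just Bayes' rule in miniature; a toy calculation (the prior and the strength of evidence are invented numbers, purely for illustration):

    # The same strength of evidence moves a plausible claim to near-certainty
    # but barely budges an implausible one.
    def posterior(prior, likelihood_ratio):
        odds = (prior / (1 - prior)) * likelihood_ratio
        return odds / (1 + odds)

    evidence = 20                       # "pretty convincing" evidence, as an odds ratio
    print(posterior(0.50, evidence))    # left his umbrella at home: ~0.95
    print(posterior(1e-6, evidence))    # we are all fishes: ~0.00002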


January 17, 2011 3:04 PM | Posted by F: | Reply

Open source peer review is a great idea in theory that is maddeningly difficult to implement in practice. The most successful example is the PLoS journals, which seem to work quite well.

There is a common problem that peer review is specifically designed to avoid. There are often results that seem strange or unexplainable to newcomers to a field that are actually well-known problems of experimental design (i.e. you're not testing what you think you're testing). This is where the experts come in; they have seen these errors before and can point them out before they propagate.

Another way to put this is that all scientific papers tell a story. That story may be totally plausible but, like most stories, does not include every single detail. It usually takes an expert to ask the right questions about those omitted details to really determine whether the story is true.

Of course, the oft-cited counter to this is that newcomers are less bound by accepted dogma and so they are more likely to make paradigm-changing contributions. This is true, but exceedingly uncommon.


January 17, 2011 7:53 PM | Posted, in reply to Laura's comment, by Anonymous: | Reply

Well actually yes... a bit.

If a discovery majorly conflicts with existing knowledge it does deserve more careful scrutiny. After all, accepting this new information means throwing out lots of old information, which presumably was reasonably well backed.


January 18, 2011 1:58 AM | Posted by KKB: | Reply

What I find most interesting about this is that we're only psychic for sex, and only just a little bit . . . fifty-three percent versus a perfectly neutral fifty percent. What that means is that if this is a true finding, then all we're doing is "discovering" something commonplace that we already all do all the time. And in our own daily experiences, we already know that we're not psychic in the way of sci fi movies. We're just our everyday ordinary selves being described in a very interesting way by this study.

Though I consider myself a scientifically minded person, in my superstitious "hunch" mind I feel that if it were somehow possible to ask for things I wouldn't want to waste my wishes on superficial desires. I'd want to save up for when it counted. Clearly based on a religion style god or genie who is listening to my pleas . . .

Now if these findings are correct, I plan to change my behavior. If it's true that we can very slightly hear the future and perhaps even very slightly affect the future, then I'm going to start practicing my retrocausality "muscles," in the hopes of being able to hone these mysterious and slightly effective skills so that I can attempt to use them when it counts. Why not?
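For scale, a back-of-the-envelope power calculation (nothing from the paper; the 80% power target and two-sided test are assumptions): how many 50/50 trials you would need before a true 53% hit rate reliably separates from chance.

    from math import sqrt
    from scipy.stats import norm

    p0, p1 = 0.50, 0.53          # chance vs. the claimed hit rate
    alpha, power = 0.05, 0.80    # conventional, assumed targets

    # Normal-approximation sample size for a one-sample binomial test
    z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
    n = ((z_a * sqrt(p0 * (1 - p0)) + z_b * sqrt(p1 * (1 - p1))) / (p1 - p0)) ** 2
    print(f"trials needed: ~{n:.0f}")   # on the order of two thousand trials

Which is the point: even if the 53% were real, you could never feel it in daily life; it only shows up when thousands of trials are pooled, which is also exactly where small biases and selective reporting do their quiet work.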


January 18, 2011 4:43 AM | Posted, in reply to BHE's comment, by Phil: | Reply

I participated in this study at Cornell as a subject and can tell you firsthand that, at least in the particular flavor of the experiment that I participated in, researcher bias couldn't have contributed.

When I arrived at the lab, the grad student running things simply told me that this would be a test of ESP, and after a two-minute debrief, I was walked to an adjacent room with an ancient IBM computer. The grad student and I were physically separated.

After a cheesy meditative period during which I was to stare at a pixelated JPEG of the Hubble Ultra Deep Field accompanied by treacherous 16-bit New Age music, I was to rate "objectively happy or sad photos". This could be something like a girl thoroughly enjoying a lollipop or a grief-stricken man at a cemetery. In my opinion, the experimenters did indeed manage to select fairly objective photos.

I was expected to judge the image by pushing either the "happy button" ('/' with a :) sticker) or the "sad button" ('z' with a :( sticker). My selection, I had been told, would be "followed by a brief subliminal image". IIRC, the flash of the second image occurred a split second after I had judged the first image.

The 'brief subliminal image' was, of course, the priming mechanism. I only caught a glimpse of one of these: a picture of an obviously disgruntled African American man glaring at me down the sights of his 9mm.

Obviously this is just conventional priming in reverse order. The negative prime (such as angry black men) or positive prime (dunno what they were) was expected to retroactively affect the speed at which I [had already] chose 'happy' or 'sad' on the first image.

The actual experiment (ie me judging photos) lasted maybe 15 minutes. Afterward, the grad student congratulated me on my 'above average psychic performance'. In triumph I claimed my psych 101 extra credit.


January 18, 2011 8:04 AM | Posted by SusanC: | Reply

Psychologists use statistical hypothesis testing more heavily than physicists, so if the problem with the paper is incorrect use of a statistical test (or just plain false positives--which you know will happen occasionally...) then a psychologist is probably a better reviewer than a physicist. (e.g. Compare the amount of time that is spent explaining these issues in undergraduate psychology courses vs. undergraduate physics courses. First year psych students are expected to know this stuff).


I like this kind of paper, but not because I believe in ESP (or homeopathy, or whatever). A good example should (a) have a blatantly implausible conclusion and (b) use methods at least as rigorous as the papers that normally appear in the same journal. The real lesson is not that ESP exists, but that some other papers in the same journal are going to be false positives too. (But you didn't suspect those papers, because their conclusions agree with your expectations.) You knew hypothesis tests have false positives, right?

The approach suggested by one of the NYT interviewees--require larger sample sizes when the (Bayesian) prior probability of the conclusion is low--has some things going for it, but is, IMHO, the wrong way to go. Filtering out the results you know to be implausible on a priori grounds gives readers a false sense of confidence in the other papers in the journal, which you know will occasionally be bogus.[*]

[*] In case of lawsuits (cf. Simon Singh) :-) a "bogus" result is one that turns out not to be replicable, and does not necessarily imply fraud on the part of the author. We've all been caught out by these (assuming a base rate of zero, about 5% of the time, if you're using 95% confidence level).
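That footnote is easy to make concrete (a sketch assuming every effect under test is exactly zero and everyone uses alpha = .05):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    n_studies, n_per_group = 1000, 30
    false_positives = 0
    for _ in range(n_studies):
        a = rng.normal(size=n_per_group)              # group 1: no real effect
        b = rng.normal(size=n_per_group)              # group 2: no real effect
        if stats.ttest_ind(a, b).pvalue < 0.05:
            false_positives += 1

    print(f"{false_positives} of {n_studies} null studies came out 'significant'")
    # Roughly 50 in 1000: the ~5% of "bogus" (non-replicable, not fraudulent)
    # results, and they land just as comfortably in papers whose conclusions
    # you already agree with.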


January 18, 2011 8:40 AM | Posted, in reply to gwern's comment, by Laura: | Reply

My claim is that we need to be treating MORE papers as though they require extraordinary evidence. ESP is a taboo in Western culture- would it require as much scrutiny if we lived in a culture where ESP is not a taboo? The point being, this paper is getting scrutiny because of the TOPIC, not the methods. These methods are standard in cognitive testing. And trying to determine a theory of mind, motivation, etc etc etc that cognitive psychologists study might as well be as figurative as something like ESP.

Your example uses two extremes that basically do not exist in most areas of science, and science typically works somewhere in between. So where is the tipping point? Where does the line change from "a man left his umbrella at home" to "we are all fishes" in that grey? What does TLP spend many of his posts on? Scientific studies that have not had enough attention and scrutiny.

There are basically two types of studies in the context we are discussing:

1) Repeating the same experiment with the same methods in order to validate results. This hardly ever happens in science, except with the most controversial claims, because scientists have to spend their time doing new things in order to get tenure.

2) In the context of a theory, a given experiment can only ever really partially support a theory. So secondary and tertiary experiments are done with new methods, new animals, new etc etc etc to support other parts of the theory. These secondary and tertiary experiments are what concerned me in my previous post, particularly when one of the authors of the 2nd & 3rd experiments generated the theory themselves (also see: pharmas who develop their own drugs, or old graduate students or postdocs of the person who came up with the theory). These papers also require close scrutiny, but don't often get it (even though it needs doing). These ones, to me, are far more insidious than the ESP paper.

Two cases where someone came out about it? Destroyed one career, the university is hushing up and covering up for the other:
Marc Hauser from Harvard: http://www.nytimes.com/2010/08/21/education/21harvard.html
Homme Hellinga from Duke:
http://dukechronicle.com/article/questions-linger-about-hellinga-case
There's an incentive to keep quiet.


January 18, 2011 8:58 AM | Posted by Guy Fox: | Reply

That physicists are better judges of quantum entanglement by virtue of their professional expertise is an odd claim coming from a pirate who moonlights as a shrink who moonlights as a critical semiotician who moonlights as a motivationally speaking misanthrope.

As for the potential of crowdsourcing the interpretation of data, the case may not be so clear. 40-60% of Americans are pretty sure that Genesis is factual (http://www.religioustolerance.org/ev_public.htm), and about half that many doubt Obama's citizenship (http://politicalticker.blogs.cnn.com/2010/08/04/cnn-poll-quarter-doubt-president-was-born-in-u-s/). Given the power of the confirmation bias, how are these people going to deal with data contrary to their prejudices? Despite the personal flaws of scientists and the institutional flaws in the media of their discourse, the fact that they (should) have been socialized to the norm that you can't know the answer before the question has been asked might be worth something. Crowdsourcing might be great for questions whose answers are low on interpretation, like how many jellybeans are in the jar, but those aren't really the interesting questions. When an objective truth might be available, it's probably a bad idea to outsource it to the prejudices of intersubjectivity. A million flies eat sh!t; a million flies can't be wrong?


January 18, 2011 5:18 PM | Posted, in reply to Guy Fox's comment, by Francois Tremblay: | Reply

Consensus is not built by every single person who may or may not know anything about a subject: it is built by masses of people who do know about the subject. A hundred random people cannot beat a grandmaster, but a hundred grandmasters can beat a world champion. "Crowdsourcing" works because the "crowd" can discuss and apply expertise, even if each only has one part of the expertise needed.


January 18, 2011 6:49 PM | Posted by David: | Reply

Thank you for again providing a commentary that is unique in the blogosphere and media, and yet surprisingly obvious once stated.

Two comments:
1. The main flaw in the study was that there weren't 9 experiments. There were untold dozens of experiments, of which only the 9 most significant were discussed in detail. It was an exploratory study, not a confirmatory study.

2. The paper was demolished by statisticians back in November. By the time that NPR and the NYT wrote their articles, it was old news.
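The exploratory-vs-confirmatory point can be illustrated with a sketch (the counts here are hypothetical, not a claim about what Bem actually ran): run a pile of small null experiments and write up only the best-looking handful.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)

    n_experiments, n_subjects = 40, 50        # hypothetical numbers
    pvals = []
    for _ in range(n_experiments):
        # Each subject's hit rate over 100 pure-chance trials
        hit_rates = rng.binomial(100, 0.5, size=n_subjects) / 100
        pvals.append(stats.ttest_1samp(hit_rates, 0.5).pvalue)

    best_nine = sorted(pvals)[:9]
    print("the nine 'best' p-values:", [round(p, 3) for p in best_nine])
    # A couple of these will typically dip under .05 even though every
    # experiment was noise -- cherry-picking winners after the fact is how
    # exploratory results get dressed up as confirmatory ones.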


January 19, 2011 1:48 PM | Posted by Kate Seldman: | Reply

Hello, I'm Kate Wharmby Seldman, the Health Editor at Opposing Views (www.opposingviews.com). Sorry to spam your comments, but I couldn't find a contact email on your site.

We're a growing online news/information platform that publishes expert opinions, analysis, questions and answers from groups such as PETA, the NRDC, the NRA and Amnesty International. Each month hundreds of thousands of readers come to Opposing Views to learn about and discuss important issues.

In an effort to build a significant mental health section, we're reaching out to experts on the subject, and I thought The Last Psychiatrist would make a strong addition. If you're interested, we'd like you to become an Opposing Views expert.

What does being an expert mean? Essentially, you give us permission to publish content from your site through an RSS feed or by posting it directly on our site. We include a byline and links to your site which enhance your position on Google and other search engines. In addition, the majority of our content is carried on Google news, Facebook, Twitter and other news and media sites. Your work will also be seen on LivingWithAnxiety.com, which is an associate site run by Deep Dive Media. It's a great way to reach an even wider audience.

Please contact me with any comments and/or questions, and let me know if you'd like to join us.

Kate Wharmby Seldman
Health Editor
[email protected]
Opposing Views
www.opposingviews.com
Los Angeles, CA
310-488-6847


January 19, 2011 3:20 PM | Posted by Anonymous: | Reply

By the looks of it, Bem failed to perform the Bonferroni Correction.
http://en.wikipedia.org/wiki/Bonferroni_correction

Or if you are Bayesian, read "Why Psychologists Must Change the Way They Analyze Their Data: The Case of Psi" by Wagenmakers: http://dl.dropbox.com/u/1018886/Bem6.pdf
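For anyone who hasn't seen it, the Bonferroni correction is just this (the p-values below are invented; the point is the mechanism, not Bem's actual numbers):

    alpha = 0.05
    m = 9                                     # say, nine reported experiments
    threshold = alpha / m                     # require p < alpha/m, not p < alpha

    reported_pvals = [0.01, 0.03, 0.002, 0.049, 0.20]   # made-up examples
    survivors = [p for p in reported_pvals if p < threshold]
    print(f"corrected threshold: {threshold:.4f}; survivors: {survivors}")
    # Only 0.002 clears the bar.  The correction is blunt (it over-penalizes
    # correlated tests), but it's the simplest guard against the
    # run-many-tests, report-the-winners problem.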


January 20, 2011 5:27 AM | Posted, in reply to Francois Tremblay's comment, by Guy Fox: | Reply

Monsieur Tremblay, it would seem that we agree. If consensus "is built by masses of people who do know about the subject" because "the "crowd" can discuss and apply expertise, even if each only has one part of the expertise needed," then it sounds like you're proposing the same kind of open, scientific discourse I am. That is, let people who know what they're talking about hash it out in a public forum rather than letting The Truth come from 'any one of the mofos on metafilter' with an axe to grind. I'll even grant that many scientists have axes to grind, but their peers are generally more adept at identifying and deconstructing invalid arguments than most of Oprah's studio audience. Nous sommes d'accord, n'est-ce pas?


January 20, 2011 8:33 AM | Posted by Keith: | Reply

Apparently, twitter et al would make people too freaked out and intimidated to do research:

http://www.nature.com/news/2011/110119/full/469286a.html?s=news_rss

Funny. And yet it doesn't keep me from posting all those drunken pictures...party on, brahs!


January 25, 2011 4:07 AM | Posted by Whatever: | Reply

Ok doc, can we talk about Skins now?


January 25, 2011 10:52 AM | Posted by medsvstherapy: | Reply

I knew you were gonna say that.
All of you.


February 7, 2011 12:32 PM | Posted by Anonymous: | Reply

(No, "available upon request" does not cut it.)

Amen to that.
