Power Posing: Social Psych on the Edge of Irrelevancy

September 30, 2016

 

I haven't written much here lately; reading for exams and drafting a dissertation proposal are taking up most of my time. That said, I've come across several articles in the past two weeks on bad science in social psychology. Much of this stems from a batch of replication projects that attempted to recreate results from previous experiments and observational studies. The results were a mixed bag: some projects were able to replicate less than half of the original results, while another group claimed to have replicated 20 experiments, with correlations as high as 0.80 between the original and replicated effects.

 

That was just the beginning. Andrew Gelman has been a staunch critic of the statistical inferences used by social psychologists. He has been especially agitated by the study claiming that hurricanes with female names cause more damage because people do not find them threatening, and by the study claiming that "air rage" is more common on airplanes with first class cabins. Both stories were irresistible to the media, which picked them up without checking whether the data matched the claims. (Spoiler: they don't.)

 

One problem among many that Gelman points to is groups of social scientists who design studies and analyze data in ways engineered to get picked up by the media. To show that results are "significant," researchers deploy statistical tomfoolery commonly known as "p-hacking" or exploiting "researcher degrees of freedom." If you have no idea what those terms mean, think of tipping the scales with a few changes here and there to nudge a p-value of .053 down to .047. The difference may seem arbitrary, but in disciplines where null hypothesis testing and p-values determine whether results are "significant," or, more accurately, whether they get published, it makes all the difference.
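
How much do those small choices matter? Here is a toy simulation, a minimal sketch in Python assuming numpy and scipy are installed (the sample sizes and number of analysis variants are arbitrary assumptions for illustration). Even with no true effect, a researcher who tries a few interchangeable outcome measures plus an optional "outlier" exclusion, then reports whichever test clears .05, ends up with a false-positive rate far above the nominal 5 percent.

```python
# Toy simulation of "researcher degrees of freedom": with no real effect,
# trying several arbitrary analysis variants and keeping the best p-value
# pushes the false-positive rate well above the nominal 5 percent.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims, n = 5_000, 40
false_positives = 0

for _ in range(n_sims):
    group = rng.integers(0, 2, n)        # two groups, no true difference
    outcomes = rng.normal(size=(n, 3))   # three interchangeable outcome measures
    p_values = []
    for k in range(3):                   # choice 1: which outcome to report
        y = outcomes[:, k]
        p_values.append(stats.ttest_ind(y[group == 0], y[group == 1]).pvalue)
        # choice 2: drop the most extreme observation as an "outlier"
        keep = np.argsort(np.abs(y - y.mean()))[:-1]
        g, yt = group[keep], y[keep]
        p_values.append(stats.ttest_ind(yt[g == 0], yt[g == 1]).pvalue)
    if min(p_values) < 0.05:             # report whichever test "worked"
        false_positives += 1

print(f"False-positive rate: {false_positives / n_sims:.2f}")  # well above 0.05
```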

 

 

Most recently, the "power pose" study (you may have seen the TED Talk) has been revealed as bogus by one of its co-authors. Some of the analytical and study design choices revealed would earn an undergraduate an F in a methods class. For example, disclosing the hypothesis you are testing to a research subject is generally not a good idea unless ethical considerations require it. Sampling until you get the desired result is also out of bounds.
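
That last trick, often called optional stopping, is easy to demonstrate. Below is a minimal sketch in Python, again assuming numpy and scipy (the batch size and cap are arbitrary assumptions): a researcher peeks at the p-value after every batch of subjects and stops the moment it dips below .05, even though both conditions are drawn from the same distribution.

```python
# Toy demonstration of optional stopping ("sampling until getting the
# desired result"): testing after every batch and stopping at the first
# p < .05 inflates the false-positive rate even with no true effect.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_sims, batch, max_n = 5_000, 10, 100
hits = 0

for _ in range(n_sims):
    a, b = [], []
    while len(a) < max_n:
        a.extend(rng.normal(size=batch))   # both conditions drawn from the
        b.extend(rng.normal(size=batch))   # same distribution: no real effect
        if stats.ttest_ind(a, b).pvalue < 0.05:
            hits += 1                      # "significant" -- stop and publish
            break

print(f"False-positive rate with peeking: {hits / n_sims:.2f}")  # well above 0.05
```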

 

<sarcasm> But in this day and age, who cares about ethics? </sarcasm>

 

You may be asking yourself, "Why are you writing about this on a Saturday morning?"

 

Because it is incredibly disappointing.

 

This behavior, largely conducted by tenured professors whose dismissal is quite difficult even after evidence of unethical behavior surfaces, is harmful. These scholars are doing a disservice to themselves, their students, and the public. They are teaching a generation of graduate students that this is how science is done: publish or perish at any cost. In today's academic landscape one must publish constantly, but here's the thing about human behavior: it is often non-linear and does not always fit nicely on a regression line, and the sampling and data are often so flawed that statistical inference should not be conducted on them at all. Meanwhile, many top academic journals require these frequentist statistical methods to model human behavior.

 

Enter Susan T. Fiske.

 

Fiske is noted for her work on stereotypes and discrimination. Along with co-author Peter Glick, she is responsible for creating the theory of ambivalent sexism, which I use a lot in my research.

 

(Side note: Given all these replication failures, I contacted Glick for the ambivalent sexism data, and he gladly obliged. Based on my secondary analysis, I am confident including it in my work. As far as I can tell, there is no methodological hacking, though I make no claim to be a statistical expert.)

 

Recently, a column Fiske penned for the Observer, a publication of the Association for Psychological Science, was leaked via Dropbox before it was published. In the missive, Fiske calls public criticism of flawed methods, among other things, "methodological terrorism." Fiske believes criticism of flawed science should occur behind closed doors or behind paywalls. "We are going to build a paywall. And we are going to make scientists and universities pay for it." That is barely an exaggeration of the larger problem here.

 

The old guard seems frightened of open science and open access journals. The mysteries of statistics are not so mysterious anymore; statistical knowledge is no longer the gatekeeping tool it used to be.

 

To that point, it is incredibly discouraging that a well-respected psychologist would pen a letter in which criticism is dismissed as "bullying," especially given the power she holds as an editor of academic journals, where she can decide on her own whether or not a manuscript goes out for peer review.

 

What is the motivation for publishing junk science? As I alluded to earlier, publish or perish is part of it. But the larger motivation is likely keeping a steady flow of funds pouring through labs and departments to conduct even more junk science. Meanwhile, graduate students under these scholars' tutelage can be socialized into norms of unethical and harmful behavior, publishing crap science about the gendered names of hurricanes, air rage due to first class cabins, or how driving an expensive car can lead to immoral behavior.

 

Sociology is not immune to this phenomenon. Take Sudhir Venkatesh, who wrote a bestseller called Gang Leader for a Day: A Rogue Sociologist Takes to the Streets. Rogue indeed. Venkatesh admittedly deceived his dissertation chair while conducting the ethnography that became the basis of the book, and some scholars claimed he exploited drug addicts for personal gain. He later moved to Columbia, where allegations of professional misconduct and financial misdealings were investigated to the point that he reimbursed the center he ran $13,000. More damning, however, were allegations that the sex workers he was researching in New York were being exploited for his personal gain. Similar allegations were leveled against him when he was studying gangs in Chicago.

 

Alice Goffman is another example of this type of behavior. Her bestseller On the Run gained critical acclaim. It's a good book; it reads like a novel. And that is part of the problem. Goffman made egregiously bad choices, to the point of riding along with a subject who was out to commit a murder. Yes, a murder. This led someone to fact-check her ethnography, whose participants should not have been identifiable; it is worth a read. There are also inconsistencies throughout the book, and the dissertation it is based on is accessible only at Princeton University, in a restricted room with no recording devices allowed. What?

 

Let's not forget about the UCLA political science doctoral student who fabricated an entire study, or the numerous scholars who have been busted plagiarizing Wikipedia, other scholars' work, and their own work, to the point of copying and pasting entire paragraphs.

 

Let's also not forget about graduate students who conduct ethnographies without disclosing to their participants that they are collecting data on them. This actually happens, those students go on to tenure-track jobs, and it is utterly disturbing how bad, even toxic, behavior is routinely rewarded in the halls of the academy.

 

I suppose what really irks me about all this is my own quandary. As a first-year graduate student, I was eager to get my hands dirty with some research. I was working on a thesis project that involved in-depth interviewing, but I also wanted to run a larger-N study examining perceptions of sexual misconduct by high school teachers.

 

Like economists and psychologists, sociologists sometimes use a university's human subjects pool to conduct research. Students in certain courses are required to participate in these studies. Selection bias, anyone? At any rate, I pushed forward with the study and collected over 1,100 responses. However, some respondents completed the study in 30 seconds, which told me they had simply clicked through it, so I excluded those responses from further analysis.
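
That screening step is simple to express in code. The sketch below is purely illustrative; the file name and the duration_seconds column are hypothetical stand-ins for the survey export, not the actual dataset.

```python
# Minimal sketch of a completion-time screen for survey data.
# "survey_responses.csv" and "duration_seconds" are hypothetical names.
import pandas as pd

responses = pd.read_csv("survey_responses.csv")
print(f"Raw responses: {len(responses)}")

# Drop respondents who clicked through in under 30 seconds; completion
# times that fast signal inattentive answering rather than real data.
attentive = responses[responses["duration_seconds"] >= 30].copy()
print(f"Retained after speed screen: {len(attentive)}")
```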

 

I had some findings, and they were interesting. However, it occurred to me that undergraduate students might not be the best sample, so I ran the study on Amazon Mechanical Turk a year later and got wildly different results. There was the dilemma: I could pick and choose whatever I wanted to include in a manuscript, leaving out the undesirable findings. Fortunately, I have a sense of ethics and tossed the study in the garbage bin. I still play with the data to learn new methods, but that study will never see the light of day, as it is junk science.

 

However, I probably could have gotten away with it. The study would have wound up in The Atlantic because it was a sexy story, and I would have a wonderful line on my CV. This is the ethos sometimes encouraged in academia: ethical lines become fuzzy, statistical tools become weapons, and data become inconsequential next to the researcher's preferred outcomes, as long as there is another publication on the CV.

 

Given all this, who is going to change it? Until the structural issues are addressed, in a system where scholars with power and funding call the shots, junk science and its institutional support will continue to proliferate. That does not mean I have to be a part of it. I may have fewer lines on my CV, but at least I can sleep at night, not having given a TED Talk and numerous media interviews about junk science.

 

 

 

 

 
