A little over a week ago, I had the opportunity to go to yet another meeting of the Society for Personality and Social Psychology (SPSP). It’s always a great time, with plenty of very interesting talks and posters! It’s also always a pleasure to travel from the harsh Canadian winter to someplace warm to talk about psychology. Walking around in a t-shirt in February is not a common experience for me.
Perhaps this is just my perception, but over the past few years there seems to be a growing trend toward people doing meta-analyses of the studies they present. I’m sure you know what I’m talking about: they present three studies, and maybe the last one has only a marginal effect, but then they say, “But when you meta-analyze over all three studies, the overall effect is highly significant.” This year I saw at least a couple of people do this in their talks, and I’ve seen it before at previous conferences and in other contexts. So I want to talk a little bit about these informal mini-meta-analyses—to distinguish them from more formal meta-analyses, I’m going to call them “meso-analyses”—and discuss some of the caveats of this technique.
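The talks I saw didn’t always say how they combined their studies, but one common and simple way to do this kind of thing is Stouffer’s z-score method: convert each study’s one-tailed p-value to a z-score, sum them, and divide by the square root of the number of studies. Here’s a minimal sketch in Python (the p-values are made up for illustration):

```python
from math import sqrt
from statistics import NormalDist

def stouffer_combined_p(p_values):
    """Combine one-tailed p-values from several studies into a single
    overall p-value using Stouffer's z-score method."""
    nd = NormalDist()
    # Convert each p-value to a z-score, sum, and rescale by sqrt(k).
    z_scores = [nd.inv_cdf(1 - p) for p in p_values]
    combined_z = sum(z_scores) / sqrt(len(z_scores))
    return 1 - nd.cdf(combined_z)

# Two significant studies plus one marginal one can still yield
# a "highly significant" combined result.
print(stouffer_combined_p([0.03, 0.04, 0.07]))
```

This is exactly why the move is so tempting: three individually shaky results can look quite convincing when pooled—which is also why the caveats matter.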
Several years ago, Uri Simonsohn (along with Leif Nelson and Joe Simmons) introduced the psychology community to the idea of p-hacking, and his related concept for detecting p-hacking, the p-curve. He later demonstrated that this p-curve could be used as an estimate of true effect size in a way that was better at correcting for bias than the common trim-and-fill method.
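The intuition behind the p-curve is that when a real effect is being studied, the significant p-values pile up at the small end (a right-skewed curve), whereas p-hacking a null effect tends to produce p-values bunched just under .05. A quick simulation sketch of the first half of that claim, using hypothetical one-sided z-tests (not Simonsohn’s actual procedure, just the intuition):

```python
import random
from statistics import NormalDist

def significant_p_values(true_effect_z, n_studies, seed=1):
    """Simulate one-sided z-tests of an effect whose expected z-value
    is true_effect_z, and return only the significant p-values."""
    rng = random.Random(seed)
    nd = NormalDist()
    ps = [1 - nd.cdf(rng.gauss(true_effect_z, 1)) for _ in range(n_studies)]
    return [p for p in ps if p < 0.05]

# With a real effect (roughly 80% power), tiny p-values dominate.
sig = significant_p_values(true_effect_z=2.8, n_studies=10_000)
tiny = sum(p < 0.01 for p in sig)     # left end of the p-curve
near_05 = sum(p > 0.04 for p in sig)  # right end, just under .05
```

Running this, `tiny` dwarfs `near_05`—that right skew is the signature of a true effect that the p-curve looks for.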
Now, more recently, Ulrich Schimmack has been making a few waves himself with his own metric, the R-index, which he says is useful as a test of how likely a result is to be replicable. He has also gained some attention for using it as what he refers to as a “doping test”, to identify areas of research—and researchers themselves—that are likely to have used questionable research practices (QRPs) that may have inflated the results. In his paper, he shows that the R-index indicates an increase in QRPs from research in 1960 to research in 2011. He also shows that the metric can predict the replicability of studies, using data from the Reproducibility Project and the Many Labs Project.
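As I understand the definition (do read Schimmack’s own write-ups for the authoritative version), the R-index starts from the median observed power across a set of studies, computes an “inflation” term—the success rate minus that median power—and subtracts the inflation back out. A rough sketch with made-up numbers:

```python
from statistics import median

def r_index(observed_powers, success_rate):
    """R-index as I understand Schimmack's definition:
    median observed power minus the inflation of the reported
    success rate above that power (equivalently,
    2 * median power - success rate)."""
    med = median(observed_powers)
    inflation = success_rate - med
    return med - inflation

# A literature reporting 95% significant results with median
# observed power of only .60 gets penalized for the gap.
print(r_index([0.45, 0.60, 0.72], success_rate=0.95))  # 0.25
```

The logic is the “doping test” part: if nearly every reported study “worked” but the studies were only powered to work 60% of the time, something other than luck is propping up the success rate.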
Back in May, Uri Simonsohn posted an article to his blog about studying effect sizes in the lab, with the general conclusion that the sample sizes needed for a sufficiently precise estimate of effect size make it essentially infeasible for the majority of lab studies. Although I was not at the recent SESP conference, I have been told he discussed this (and more!) there. Felix Schönbrodt further discussed Simonsohn’s point, noting that reporting effect size estimates and confidence intervals is still important even if they are wildly imprecise, because they can still be used in a meta-analysis to achieve more precision. I think both of these posts are insightful, and I recommend reading them both. However, both use particular examples with a given level of precision or sample size to illustrate their points. I wanted to go a bit more in-depth on how the precision level and effect size change the sample size needed, using a tool in R that Schönbrodt pointed out.
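To give a flavour of the numbers involved before getting into the R tool: for a correlation, the Fisher z transform has a standard error of about 1/√(n−3), so you can solve directly for the n that gives a desired confidence-interval half-width. This back-of-envelope Python version works on the z′ scale (close to the r scale when r is near zero), which is cruder than what Schönbrodt’s tool does:

```python
from math import ceil
from statistics import NormalDist

def n_for_precision(half_width, confidence=0.95):
    """Approximate sample size for a correlation CI with the given
    half-width, via the Fisher z transform: SE(z') = 1/sqrt(n - 3),
    so half-width = z_crit / sqrt(n - 3)."""
    z_crit = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    # Rearranged for n; round up since n must be an integer.
    return ceil((z_crit / half_width) ** 2 + 3)

print(n_for_precision(0.10))  # a CI of roughly r ± .10
print(n_for_precision(0.05))  # a CI of roughly r ± .05
```

Halving the desired half-width roughly quadruples the required n—which is the heart of Simonsohn’s infeasibility point: the precision most people would consider meaningful demands samples far beyond a typical lab study.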
I’ve been watching the recent debate about replication with interest, concern, and more than a little amusement. It seems everyone has an opinion on the matter (leave it to a field of scientists to have twice as many opinions as there are scientists in the field!), and at times the discussion has been quite heated. But as a grad student, it’s been difficult to know whether I should throw my own hat in the ring. With psychology heavyweights like Kahneman and Gilbert voicing their opinions, what room is there for a third-year grad student? But fortunately (or unfortunately), I’ve never been one to know when to keep my opinions to myself, so I want to present my own thoughts on the matter. My perspective is that, even if the issue gets heated at times, this discussion can be fruitful as we learn to navigate a changing discipline.
Over the past year, psychology as a field, and in particular social psychology, has come under scrutiny after several notable cases of scientific fraud. The most notable was Diederik Stapel, who outright fabricated data for at least 30 publications. A couple of other cases of data manipulation and fraud have surfaced recently, leading to further resignations of researchers in the field. Amidst these news stories, some have asked, “Is psychology trustworthy? Is it even a science at all?”
Of course, these are not new questions for psychology to deal with. Making the case for psychology as a science has been a continual process over the years, and psychology to some extent still suffers from the impression that has remained from the psychoanalytic tradition of Freud. The psychoanalysts loved to sit people on couches and talk about dreams and repressed childhood memories and so on. But we’re past Freud. Honest.
However, given the recent scrutiny, I thought it appropriate to take the time to address the question again and argue that yes, psychology is indeed a science. I come from the perspective of a graduate student in social psychology—traditionally the most “suspect” of the areas in psychology—and as such, most of my experience and examples come from that area. I take the “if it looks like a duck, swims like a duck, and quacks like a duck, then it probably is a duck” approach (see, I’m using the scientific method already!). I would like to argue that psychology operates very similarly to other fields of science that are not in dispute—the so-called “hard sciences”. So let me outline just a few of the ways in which psychology parallels these fields.
Prejudice is still alive and well in many areas of our society. And one mechanism that keeps prejudice alive is the perception of the accuracy of negative stereotypes. For instance, before slavery was made illegal in the US, slave owners would sometimes justify slavery by stating that God made black people less intelligent and more suited for manual labour. And of course, when they looked around, this perception was justified, since slaves with no formal education and with many years of performing manual labour generally fit the stereotype. Thus, a feedback loop was formed, where the stereotype supported the system, and the system supported the stereotype.
I recently completed my Honours thesis as a component of my BA degree in Honours Psychology. The thesis involved about a year’s worth of work from start to finish: planning out the study, doing a literature review, developing the materials, getting ethics clearance, running the study, collecting the results, analyzing them, and writing it all up. Needless to say, it feels good to be finished with it. I thought it might be a good idea to talk a little bit about the topic and what I found. Essentially, the main purpose of the research was to look at the association between ultimate justice and revenge. I’ll start by explaining each of these in a little more detail, and then tell you what I found in my own study.