Anyone who analyzes data knows (or should know!) the importance of not violating the assumptions of the tests one runs. And for common tests like t-tests, correlation, ANOVA, and regression, one of the assumptions is that the variables are normally distributed. One method that some people use, then, is a test for normality of the data, such as the Kolmogorov-Smirnov (K-S) test or the Shapiro-Wilk (S-W) test. If the test indicates a deviation from normality, they might try a transformation, or use a more robust statistical test to analyze their data. I’m here to say that this approach will make life harder than it needs to be. Here’s the summary of this article right up front: if you want to see whether normality assumptions are violated, don’t use a normality test.
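To illustrate why, here is a minimal sketch in Python (the distribution, seed, and sample sizes are my own illustrative choices, not from any particular dataset): with a large sample, even a mild departure from normality, such as a t distribution with 5 degrees of freedom, gets flagged as a significant deviation, while a tiny sample from the very same distribution typically does not.

```python
# Sketch: why a significance test of normality can mislead.
# A large sample from a mildly non-normal distribution is flagged,
# while a small sample from the same distribution usually is not,
# so the verdict tracks sample size more than practical importance.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
mild = rng.standard_t(df=5, size=5000)   # heavy-tailed, but only mildly non-normal

stat, p_large = stats.shapiro(mild)      # Shapiro-Wilk on the full sample
_, p_small = stats.shapiro(mild[:10])    # same data, tiny subsample

print(f"n=5000: W={stat:.3f}, p={p_large:.2e}")
print(f"n=10:   p={p_small:.3f}")
```

The test answers “can we detect any deviation at all?”, which is mostly a function of sample size, rather than “is the deviation big enough to matter for my analysis?”. A Q-Q plot or a direct robustness check is usually more informative.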
A little over a week ago, I had the opportunity to go to yet another meeting of the Society for Personality and Social Psychology (SPSP). It’s always a great time, with plenty of very interesting talks and posters! It’s also always a pleasure to travel from the harsh Canadian winter to someplace warm to talk about psychology. Walking around in a t-shirt in February is not a common experience for me.
Perhaps this is just my perception, but over the past few years there seems to be a growing trend toward people doing meta-analyses of the studies they present. I’m sure you know what I’m talking about: they present three studies, and maybe the last one has only a marginal effect, but then they say, “But when you meta-analyze over all three studies, the overall effect is highly significant.” This year I saw at least a couple of people do this in their talks, and I’ve seen it before at previous conferences and in other contexts. So I want to spend a little time on these informal mini-meta-analyses—to distinguish them from more formal meta-analyses, I’m going to call them “meso-analyses”—and cover some of the caveats of this technique.
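As a hedged sketch of what such a mini-meta-analysis amounts to, here is one common way of combining results, Stouffer’s method, applied to one-sided p-values in Python. The three p-values are hypothetical, chosen so the last study is only marginal on its own:

```python
# Sketch of a "meso-analysis" via Stouffer's method: convert each
# study's one-sided p-value to a z score, sum the z scores, and
# renormalize by sqrt(k). The p-values below are hypothetical.
import math
from scipy.stats import norm

p_values = [0.03, 0.04, 0.08]               # study 3 alone is only "marginal"
z_scores = [norm.isf(p) for p in p_values]  # one-sided p -> z
z_combined = sum(z_scores) / math.sqrt(len(z_scores))
p_combined = norm.sf(z_combined)

print(f"combined z = {z_combined:.2f}, combined p = {p_combined:.4f}")
```

Note the built-in caveat: this kind of combination is only meaningful if every study that was run goes into the pot. Quietly dropping studies that “didn’t work” inflates the combined result, which is part of what makes the informal version of this technique worth scrutinizing.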
Several years ago, Uri Simonsohn (along with Leif Nelson and Joe Simmons) introduced the psychology community to the idea of p-hacking, and his related concept for detecting p-hacking, the p-curve. He later demonstrated that this p-curve could be used as an estimate of true effect size in a way that was better at correcting for bias than the common trim-and-fill method.
Now, more recently, Ulrich Schimmack has been making a few waves himself, using his own metric called the R-index, which he has stated is useful as a test of how likely a result is to be replicable. He has also gained some attention for using it as what he refers to as a “doping test”, to identify areas of research—and researchers themselves—that are likely to have used questionable research practices (QRPs) that may have inflated the results. In his paper, he shows that his R-index indicates an increase in QRPs from research in 1960 to research in 2011. He also shows that this metric is able to predict the replicability of studies, by analyzing data from the Reproducibility Project and the Many Labs Project.
Back in May, Uri Simonsohn posted an article to his blog about studying effect sizes in the lab, with the general conclusion that the sample sizes needed for a sufficiently precise estimate of effect size make such estimation essentially infeasible for the majority of lab studies. Although I was not at the recent SESP conference, I have been told he discussed this (and more!) there. Felix Schönbrodt further discussed Simonsohn’s point, noting that reporting effect size estimates and confidence intervals is still important even if they are wildly imprecise, because they can still be used in a meta-analysis to achieve more precision. I think both of these posts are insightful, and I recommend that you read them both. However, both use particular examples with a given level of precision or sample size to illustrate their points. I wanted to go a bit more in-depth on how the precision level and effect size change the sample size needed, using a tool in R that Schönbrodt pointed out.
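To give a feel for the arithmetic before diving in (the post above points to a tool in R; this is a hedged Python sketch using the standard Fisher-z approximation, with an illustrative target of r = .3 and a 95% CI half-width of 0.1 in z units):

```python
# Sketch, assuming a Pearson correlation and the Fisher-z
# approximation: how many participants do we need before the 95% CI
# around r reaches a given half-width? Targets here are illustrative.
import math

def n_for_precision(halfwidth_z, conf_z=1.96):
    """Sample size so the 95% CI half-width in Fisher-z units is halfwidth_z."""
    n = (conf_z / halfwidth_z) ** 2 + 3      # se(z) = 1 / sqrt(n - 3)
    return math.ceil(n)

def ci_for_r(r, n, conf_z=1.96):
    """95% CI for r, computed in Fisher-z space and back-transformed."""
    z = math.atanh(r)
    half = conf_z / math.sqrt(n - 3)
    return math.tanh(z - half), math.tanh(z + half)

n = n_for_precision(0.1)
lo, hi = ci_for_r(0.3, n)
print(f"n = {n}, 95% CI for r = ({lo:.2f}, {hi:.2f})")
```

Even this modest precision takes nearly 400 participants, and the back-transformed interval still spans roughly .21 to .39, which gives a sense of why precise effect size estimates are so hard to get from a single lab study.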
I’ve been watching the recent debate about replication with interest, concern, and more than a little amusement. It seems everyone has an opinion on the matter (leave it to a field of scientists to have twice as many opinions as there are scientists in the field!), and at times the discussion has been quite heated. But as a grad student, it’s been difficult to know whether I should throw my own hat in the ring. With psychology heavyweights like Kahneman and Gilbert voicing their opinions, what room is there for a third-year grad student? But fortunately (or unfortunately), I’ve never been one to know when to keep my opinions to myself, so I want to present my own thoughts on the matter. My perspective is that, even if the issue gets heated at times, this discussion can be fruitful as we learn to navigate a changing discipline.
What is the “self”? Such a question has had a multitude of answers from philosophers and psychologists throughout history. Although there is an immediate understanding of what I refer to when I say “I”, upon reflection that clarity vanishes. Do I refer to my physical body? That changes over the course of my life as my cells are replaced one by one. (If I have my arm amputated, am I still the same person?) Is it my consciousness? Then I am conceivably a different person when asleep or drunk than when awake or sober. Is it my memories and experiences? Psychology has demonstrated that recalled memories are largely a reconstruction by the brain rather than a true recollection. And what happens if I get amnesia or Alzheimer’s?
All these questions make it difficult to truly pin down what the self entails. We have some sense of continuity over time, but that continuity can be easily broken. So I’d like to take some time to examine, from a psychological perspective, just what it means to have a “self” and to have a sense of self-identity. In the process, I’d like to advance a theory of the self that suggests that at least some of the continuity we experience is illusory. Instead of being a coherent structure, the self is constantly being assembled and reassembled by our minds. So with that said, hang on to your hats, and let’s begin.
Over the past year, psychology as a field, and in particular social psychology, has come under scrutiny after several notable cases of scientific fraud. The most notable was Diederik Stapel, who outright fabricated data for at least 30 publications. A couple of other cases of data manipulation and fraud have surfaced recently, leading to further resignations of researchers in the field. Amidst these news stories, some have asked the question, “Is psychology trustworthy? Is it even a science at all?”
Of course, these are not new questions for psychology to deal with. Making the case for psychology as a science has been a continual process over the years, and psychology to some extent still suffers from the impression that has remained from the psychoanalytic tradition of Freud. The psychoanalysts loved to sit people on couches and talk about dreams and repressed childhood memories and so on. But we’re past Freud. Honest.
However, given the recent scrutiny, I thought it appropriate to take the time to address the question again and argue that yes, psychology is indeed a science. I come from the perspective of a graduate student in social psychology—traditionally the most “suspect” of the areas in psychology—and as such, most of my experience and examples come from that area. I take the “if it looks like a duck, swims like a duck, and quacks like a duck, then it probably is a duck” approach (see, I’m using the scientific method already!). I would like to argue that psychology operates very similarly to other fields of science that are not in dispute—the so-called “hard sciences”. So let me outline just a few of the ways in which psychology parallels these fields.