Latest Post

Running Power Simulations with the Power of R!

Psychology, Science · 4 comments

As one of the resident “stats people” (as opposed to “regular people”) in my department, one of the most common questions I get asked is about power analyses. This is partly because more psychology journals are requiring explicit discussions of statistical power. The problem is that if your studies require anything more complicated than a t-test or a correlation, things can get a little hairy. You can do power calculations for analyses like regression, but the effect sizes those calculations rely on (like Cohen’s f²) are unfamiliar enough that you might not even be sure whether you’re running the correct test or calculating the effect size correctly. And as you move up in complexity to analyses like multilevel models, SEM, and so on, there isn’t even an analytic solution. That’s where power simulations come in.
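The basic recipe is straightforward: repeatedly simulate data under the effect you expect, run your planned analysis on each simulated dataset, and count how often the result comes out significant. Here’s a minimal sketch of that recipe using a simple two-group design; the effect size (d = 0.4), group size, and number of simulations are placeholder assumptions, not recommendations.

```r
# Minimal power simulation for a two-group design.
# Assumed values (placeholders): d = 0.4, n = 50 per group.
set.seed(42)

n_sims <- 5000
n_per_group <- 50
d <- 0.4

p_values <- replicate(n_sims, {
  control   <- rnorm(n_per_group, mean = 0, sd = 1)
  treatment <- rnorm(n_per_group, mean = d, sd = 1)
  t.test(treatment, control)$p.value
})

# Estimated power: proportion of simulated studies with p < .05
mean(p_values < .05)
```

The same loop handles the hard cases: swap the data-generating lines and the t.test() call for, say, a simulated multilevel dataset and a mixed-model fit, and the proportion of significant results is still your power estimate.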

More Posts

The Final Conversation

Personal · Leave a comment

As your face flickered onto my screen, I could see you looked tired. I had called you on Skype for Mother’s Day. You had just been talking with Jennifer, and Dad said you were very tired. But truth be told, you looked worse than tired. You looked like someone struggling to keep up with a conversation that had turned into a foreign language. Nods and occasional whispers were all you could muster. But it was Mother’s Day, and your son was calling.

Dad and I spent most of the time talking. You sat resting, your eyes occasionally fluttering open, but mostly I couldn’t tell whether you were even awake. Dad and I chatted about all the usual things: how school was going, what I had been up to, what the weather was like here and over there. All the while we danced around the biggest topic, the one that all of us knew concerned us most but for which there was nothing left to say. The tumour in your brain was at the helm now, and we were sailing the sea of inevitability toward an end no one wanted to reach. There was no turning back, no slowing down. There was only the time it would take, and no more. So what else was there to say?

Minding the Meta-Analysis

Psychology, Science · 2 comments

A little over a week ago, I had the opportunity to go to yet another meeting of the Society for Personality and Social Psychology (SPSP). It’s always a great time, with plenty of very interesting talks and posters! It’s also always a pleasure to travel from the harsh Canadian winter to someplace warm to talk about psychology. Walking around in a t-shirt in February is not a common experience for me.

Perhaps this is just my perception, but over the past few years there seems to be a growing trend toward people doing meta-analyses of the studies they present. I’m sure you know what I’m talking about: they present three studies, and maybe the last one has only a marginal effect, but then they say, “But when you meta-analyze over all three studies, the overall effect is highly significant.” This year I saw at least a couple of people do this in their talks, and I’ve seen it before at previous conferences and in other contexts. So I want to talk just a little bit about these informal mini-meta-analyses (to distinguish them from more formal meta-analyses, I’m going to call them “meso-analyses”) and go through some of the caveats of this technique.
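To make the discussion concrete, here is roughly what one of these meso-analyses amounts to: an inverse-variance-weighted (fixed-effect) average of the per-study effect sizes. The effect sizes and standard errors below are invented for illustration; notice how the third study, only marginal on its own, still contributes to a clearly significant pooled result.

```r
# Toy fixed-effect meso-analysis of three studies.
# Effect sizes (Cohen's d) and standard errors are invented;
# study 3 alone gives z = 0.33 / 0.19 = 1.74, p = .08 (marginal).
d  <- c(0.45, 0.38, 0.33)
se <- c(0.18, 0.20, 0.19)

w <- 1 / se^2                       # inverse-variance weights
d_pooled  <- sum(w * d) / sum(w)    # pooled effect size
se_pooled <- sqrt(1 / sum(w))       # pooled standard error
z_pooled  <- d_pooled / se_pooled
p_pooled  <- 2 * pnorm(-abs(z_pooled))   # two-tailed p-value

round(c(d = d_pooled, se = se_pooled, z = z_pooled, p = p_pooled), 4)
```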

Evaluating the R-Index and the P-Curve

Psychology, Science · 4 comments

Several years ago, Uri Simonsohn (along with Leif Nelson and Joe Simmons) introduced the psychology community to the idea of p-hacking, along with a related tool for detecting it: the p-curve. He later demonstrated that the p-curve could also be used to estimate true effect size, in a way that corrects for bias better than the common trim-and-fill method.
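If you haven’t seen it, the p-curve is just the distribution of the statistically significant p-values across a set of studies: a true effect piles up very small p-values (right skew), while a null effect leaves the significant p-values uniformly distributed (flat). This isn’t Simonsohn’s estimation procedure itself, but a quick simulation shows the intuition; the sample size and effect sizes are arbitrary choices for the demo.

```r
# Simulate the p-curve's basic intuition: among *significant*
# two-group t-tests, a true effect produces right skew, while a
# null effect produces a flat curve. (d and n are arbitrary.)
set.seed(1)
sim_p <- function(d, n = 30, reps = 10000) {
  p <- replicate(reps, t.test(rnorm(n, mean = d), rnorm(n))$p.value)
  p[p < .05]   # the p-curve looks only at significant results
}

hist(sim_p(d = 0.5), breaks = 20, main = "True effect: right-skewed p-curve")
hist(sim_p(d = 0),   breaks = 20, main = "No effect: flat p-curve")
```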

More recently, Ulrich Schimmack has been making a few waves himself with his own metric, the R-index, which he argues is useful as a test of how likely a result is to be replicable. He has also gained some attention for using it as what he calls a “doping test” to identify areas of research (and researchers themselves) that are likely to have used questionable research practices (QRPs) that may have inflated their results. In his paper, he shows that the R-index indicates an increase in QRPs in research from 1960 to 2011. He also shows that the metric can predict the replicability of studies, using data from the Reproducibility Project and the Many Labs Project.
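Based on my reading of Schimmack’s description, the R-index works like this: compute the observed power of each test, take the median, and subtract an “inflation” term, defined as the success rate (the proportion of significant results) minus that median observed power. A rough sketch, using invented z-values:

```r
# Rough R-index sketch (my reading of Schimmack's description):
#   inflation = success rate - median observed power
#   R-index   = median observed power - inflation
z <- c(2.10, 2.45, 1.70, 2.02, 2.60)   # invented z-values from five studies

z_crit <- qnorm(1 - .05 / 2)                          # two-tailed criterion (~1.96)
obs_power <- pnorm(z - z_crit) + pnorm(-z - z_crit)   # observed power of each test

median_power <- median(obs_power)
success_rate <- mean(abs(z) > z_crit)   # proportion of significant results
inflation    <- success_rate - median_power
r_index      <- median_power - inflation
r_index
```

The intuition: a literature where every result just barely clears p < .05 has low observed power but a near-perfect success rate, so the inflation term is large and the R-index comes out low.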