Posts Tagged “statistics”

Running Power Simulations with the Power of R!

Psychology, Science · · 4 comments

As one of the resident “stats people” (as opposed to “regular people”) in my department, one of the most common questions I get asked is about power analyses. This is partly because more psychology journals are requiring explicit discussions of statistical power. The problem is that if your studies require any analysis more complicated than a t-test or a correlation, things can get a little hairy. You can do power calculations for analyses like regression, but the effect sizes those calculations use (like Cohen’s f²) are uncommon enough that you may not be sure whether you’re using the correct test or calculating the effect size correctly. And as you move up in complexity to analyses like multilevel models, SEM, and so on, there isn’t even an analytic solution. That’s where power simulations come in.
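To give a flavor of what such a simulation looks like, here is a minimal sketch in R, assuming a hypothetical two-predictor regression; the function name, effect sizes, and sample sizes below are purely illustrative and not taken from the post itself:

    set.seed(42)

    sim_power <- function(n = 100, b1 = 0.3, b2 = 0.2, n_sims = 1000, alpha = .05) {
      p_values <- replicate(n_sims, {
        x1 <- rnorm(n)
        x2 <- rnorm(n)
        y  <- b1 * x1 + b2 * x2 + rnorm(n)           # generate data under the assumed model
        fit <- lm(y ~ x1 + x2)
        summary(fit)$coefficients["x1", "Pr(>|t|)"]  # p-value for the predictor of interest
      })
      mean(p_values < alpha)                         # proportion of significant runs = power
    }

    sim_power(n = 100)  # estimated power to detect b1 at n = 100
    sim_power(n = 200)  # re-run with a larger sample to see how power changes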

Normal Distribution

You (Probably) Don’t Need to Test Your Data for Normality

Anyone who analyzes data knows (or should know!) the importance of not violating the assumptions of the tests one runs. For common tests like t-tests, correlation, ANOVA, and regression, one of the assumptions is that the variables are normally distributed. One approach some people take, then, is to run a test for normality on the data, such as the Kolmogorov-Smirnov (K-S) test or the Shapiro-Wilk (S-W) test. If the test indicates a deviation from normality, they might try a transformation, or use a more robust statistical test to analyze their data. I’m here to say that this is only going to make life harder for you. Here’s the summary of this article right up front: if you want to see whether normality assumptions are violated, don’t use a normality test.
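As a quick illustration of the underlying problem, here is a small R sketch with simulated, made-up data (the distributions and sample sizes are invented for this example): with a large sample a normality test tends to flag deviations too trivial to matter, while with a small sample it can easily miss real skew.

    set.seed(123)

    big_mild  <- rt(5000, df = 10)    # n = 5000: mildly heavy-tailed, but harmless at this n
    small_bad <- rchisq(15, df = 5)   # n = 15: noticeably skewed

    shapiro.test(big_mild)$p.value    # usually tiny: "significant" non-normality that barely matters
    shapiro.test(small_bad)$p.value   # often non-significant: the real skew can go undetected

    qqnorm(small_bad); qqline(small_bad)  # a Q-Q plot is usually more informative than a p-value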

Hipster Ariel - So Meta

Minding the Meta-Analysis

Psychology, Science · · 2 comments

A little over a week ago, I had the opportunity to go to yet another meeting of the Society for Personality and Social Psychology (SPSP). It’s always a great time, with plenty of very interesting talks and posters! It’s also always a pleasure to travel from the harsh Canadian winter to someplace warm to talk about psychology. Walking around in a t-shirt in February is not a common experience for me.

Perhaps this is just my perception, but over the past few years there seems to be a growing trend toward people doing meta-analyses of the studies they present. I’m sure you know what I’m talking about: they present three studies, and maybe the last one shows only a marginal effect, but then they say, “But when you meta-analyze across all three studies, the overall effect is highly significant.” This year I saw at least a couple of people do this in their talks, and I’ve seen it before at previous conferences and in other contexts. So I want to talk a little bit about these informal mini-meta-analyses (to distinguish them from more formal meta-analyses, I’m going to call them “meso-analyses”) and discuss some of the caveats of this technique.
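For readers curious what such a “meso-analysis” amounts to mechanically, here is a bare-bones R sketch of one common approach: a fixed-effect, inverse-variance pooling of three studies’ effect sizes. The d values and sample sizes are invented for illustration, and this is only one of several ways a pooled effect might be computed.

    d  <- c(0.45, 0.38, 0.21)                 # Cohen's d from three hypothetical studies
    n1 <- c(40, 45, 60); n2 <- c(40, 45, 60)  # group sizes in each study

    v  <- (n1 + n2) / (n1 * n2) + d^2 / (2 * (n1 + n2))  # approximate sampling variance of each d
    w  <- 1 / v                                          # inverse-variance weights

    d_meta  <- sum(w * d) / sum(w)      # pooled effect size
    se_meta <- sqrt(1 / sum(w))         # standard error of the pooled effect
    z_meta  <- d_meta / se_meta
    p_meta  <- 2 * pnorm(-abs(z_meta))  # two-tailed p-value for the pooled effect

    c(d = d_meta, se = se_meta, z = z_meta, p = p_meta)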

Sample size for given CI precision and effect size

The Price of Precision

Back in May, Uri Simonsohn posted an article to his blog about studying effect sizes in the lab, with the general conclusion that the sample sizes needed for a sufficiently precise estimate of an effect size make this essentially infeasible for the majority of lab studies. Although I was not at the recent SESP conference, I have been told he discussed this (and more!) there. Felix Schönbrodt further discussed Simonsohn’s point, noting that reporting effect size estimates and confidence intervals is still important even if they are wildly imprecise, because they can still be used in a meta-analysis to achieve more precision. I think both of these posts are insightful, and I recommend that you read them both. However, both of them use particular examples with a given level of precision or sample size to illustrate their points. I wanted to go a bit more in-depth on how the precision level and effect size change the sample size needed, using a tool in R that Schönbrodt pointed out.
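As a rough illustration of that precision-sample size relationship (this is not the specific tool mentioned in the post, just a hand-rolled sketch), here is a small R function that finds the sample size needed for a confidence interval of a given total width around a correlation, using the Fisher z transformation:

    n_for_ci_width <- function(r, width, conf = .95) {
      z_crit <- qnorm(1 - (1 - conf) / 2)
      for (n in 10:100000) {
        z  <- atanh(r)          # Fisher z transform of the correlation
        se <- 1 / sqrt(n - 3)   # standard error of z
        ci <- tanh(c(z - z_crit * se, z + z_crit * se))  # back-transform the CI to r units
        if (diff(ci) <= width) return(n)
      }
      NA
    }

    n_for_ci_width(r = .30, width = .20)  # n for a total CI width of .20 (roughly +/- .10)
    n_for_ci_width(r = .30, width = .10)  # halving the width roughly quadruples the required n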

Teacher and blackboard, in black and white

8 Courses Every School Should Teach

Education is crucial to the functioning of a strong, healthy society, because modern society is built upon knowledge and information. During our many, many years of education, we learn math, science, history, English, art, health, and more. These are all good things, and important for a well-rounded education. But amid this smorgasbord of studying, there are several topics that are generally not covered that I think are important for every school to teach. Some of these might not need an entire course to cover them, but at the very least, these are topics that every school should be sure to include in its curriculum, preferably as early as possible. Let me share my thoughts with you.

The Null Hypothesis

Religion, Science · · 12 comments

One of the most accurate ways to describe my religious beliefs (or lack thereof) is by way of a concept known as the “null hypothesis”. Like most atheists, I do not claim to know that God does not exist; I merely claim that there is not enough evidence to justify belief in God. The best way to illustrate this claim is through the null hypothesis, a statistical concept used for hypothesis testing in science. Because statistics is not a strong point for many people, I will try to explain it using a minimum of stats jargon; some will be required, however, and I will explain what each term means as best I can. I really feel this is an important concept to understand when assessing evidence claims (which we all do, all the time). So hang on for the ride!