Over the past year or two I have been trying to delve into the world of machine learning, to angle myself for a job in data science. (Hire me!) Data science is a pretty broad discipline, covering everything from basic descriptives and visualizations to complex deep learning algorithms and AI. But a key part of data science is machine learning. As I have gone through the process of understanding machine learning, I’ve realized that a number of its tools and procedures would be useful in psychology as well. So let me share with you some of the wonders of machine learning!
As your face flickered onto my screen, I could see you looked tired. I had called you on Skype for Mother’s Day. You had just been talking with Jennifer, and Dad said you were very tired. But truth be told, you looked worse than tired. You looked like someone struggling to keep up with a conversation that had turned into a foreign language. Nods and occasional whispers were all you could muster. But it was Mother’s Day, and your son was calling.
Dad and I spent most of the time talking. You sat resting, your eyes occasionally fluttering open, but mostly I couldn’t tell if you were even awake. Dad and I chatted about all the usual things: how school was going, what I had been up to, what the weather was like here and over there. All the while we danced around the biggest topic, the one that all of us knew concerned us most but for which there was nothing left to say. The tumour in your brain was at the helm now, and we were sailing the sea of inevitability toward an end no one wanted to reach. There was no turning back, no slowing down. There was only the time it would take, and no more. So what else was there to say?
Several years ago, Uri Simonsohn (along with Leif Nelson and Joe Simmons) introduced the psychology community to the idea of p-hacking, and to a related tool for detecting it, the p-curve: the distribution of statistically significant p-values across a set of studies. He later demonstrated that the p-curve can also be used to estimate the true effect size, in a way that corrects for bias better than the common trim-and-fill method.
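To make the p-curve idea concrete, here is a minimal simulation sketch. It is in Python rather than the R the p-curve authors use, and `significant_pvalues` is my own illustrative function, not theirs: when there is no true effect, the p-values that clear the .05 bar are roughly uniform, whereas a real effect piles them up near zero, and that right skew is what the p-curve looks for.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def significant_pvalues(effect, n_per_group=30, n_studies=2000):
    """Simulate two-group studies and keep only the p < .05 results."""
    ps = []
    for _ in range(n_studies):
        a = rng.normal(0.0, 1.0, n_per_group)
        b = rng.normal(effect, 1.0, n_per_group)
        p = stats.ttest_ind(a, b).pvalue
        if p < 0.05:
            ps.append(p)
    return np.array(ps)

null_ps = significant_pvalues(effect=0.0)  # no true effect
real_ps = significant_pvalues(effect=0.5)  # true effect, d = 0.5

# Under the null, significant p-values are roughly uniform on (0, .05),
# so about half fall below .025; a true effect right-skews the curve.
print("null:", round(float(np.mean(null_ps < 0.025)), 2))
print("real:", round(float(np.mean(real_ps < 0.025)), 2))
```

Binning the surviving p-values (e.g., below vs. above .025) is the crude version of the comparison; the actual p-curve analysis does this more formally.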
More recently, Ulrich Schimmack has been making waves of his own with a metric called the R-index, which he argues serves as a test of how likely a result is to replicate. He has also gained attention for using it as what he calls a “doping test” to identify areas of research (and individual researchers) likely to have used questionable research practices (QRPs) that inflate results. In his paper, he shows that the R-index indicates an increase in QRPs from research published in 1960 to research published in 2011, and that the metric predicts the replicability of studies, using data from the Reproducibility Project and the Many Labs Project.
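As I understand the paper, the R-index combines the median observed (post-hoc) power of a set of studies with the rate of significant results: the inflation rate is the success rate minus median power, and the R-index subtracts that inflation back out. Here is a hedged Python sketch under those assumptions; the function name is mine, and restricting the input to z-statistics is my simplification (the real analyses first convert assorted test statistics to z-scores).

```python
import numpy as np
from scipy import stats

def r_index(z_values, alpha=0.05):
    """Sketch of an R-index from observed z-statistics.
    R-index = median observed power - inflation,
    where inflation = success rate - median observed power."""
    z = np.abs(np.asarray(z_values, dtype=float))
    crit = stats.norm.ppf(1 - alpha / 2)       # 1.96 for alpha = .05
    observed_power = stats.norm.sf(crit - z)   # post-hoc power per study
    median_power = float(np.median(observed_power))
    success_rate = float(np.mean(z > crit))    # share of significant results
    inflation = success_rate - median_power
    return median_power - inflation            # = 2*median_power - success_rate

# A literature where every result is "significant" but the z-values
# barely clear 1.96 yields a low R-index; comfortably large z-values
# yield a high one.
print(round(r_index([2.0, 2.1, 1.98, 2.2, 2.05]), 2))
print(round(r_index([4.0, 4.5, 3.8, 5.0]), 2))
```

The intuition: when nearly everything is significant but the observed power is only around 50%, far more successes are being reported than the evidence can support, and the index is dragged down accordingly.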
Back in May, Uri Simonsohn posted an article to his blog about studying effect sizes in the lab, with the general conclusion that the sample sizes needed for a sufficiently precise estimate of effect size make it essentially infeasible for most lab studies. Although I was not at the recent SESP conference, I have been told he discussed this (and more!) there. Felix Schönbrodt further discussed Simonsohn’s point, noting that reporting effect size estimates and confidence intervals is still important even when they are wildly imprecise, because they can be combined in a meta-analysis to achieve more precision. Both posts are insightful, and I recommend reading them. However, both use particular examples with a given level of precision or sample size to illustrate their points. I wanted to go a bit more in-depth on how the precision level and effect size change the sample size needed, using a tool in R that Schönbrodt pointed out.
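To give a rough feel for the numbers before diving into the R tool, here is a small Python sketch (my own illustrative analogue, not the tool itself) using the standard Fisher z approximation for a correlation, where the standard error is 1/sqrt(n − 3). Because the confidence interval's half-width shrinks only with the square root of n, halving the desired width roughly quadruples the required sample.

```python
import math
from statistics import NormalDist

def n_for_ci_halfwidth(halfwidth, conf=0.95):
    """Approximate sample size so that the CI for a correlation has the
    given half-width (in Fisher z units, close to r units for modest
    correlations), using SE(z) = 1 / sqrt(n - 3)."""
    zcrit = NormalDist().inv_cdf(1 - (1 - conf) / 2)  # 1.96 for 95%
    return math.ceil((zcrit / halfwidth) ** 2 + 3)

for w in (0.20, 0.10, 0.05):
    print(w, n_for_ci_halfwidth(w))
# 0.20 -> 100, 0.10 -> 388, 0.05 -> 1540: halving the width ~quadruples n
```

A half-width of ±.05 already demands well over a thousand participants, which is the heart of Simonsohn's feasibility argument for typical lab samples.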