Episode 104: Quantifying the Narrative of Replicable Science

Yoel and Alexa discuss a recent paper that aims to estimate the replicability of psychology as a discipline by analyzing the words used to describe studies. After a deep dive into the nuts and bolts of the methodology, they discuss the factors that make for the most (and least) replicable science.

Two Psychologists Four Beers

March 29, 2023 · 1h 9m · Explicit


Show Notes

Yoel and Alexa discuss a recent paper that takes a machine learning approach to estimating the replicability of psychology as a discipline. The researchers' investigation begins with a training process, in which an artificial intelligence model identifies ways that textual descriptions differ for studies that pass versus fail manual replication tests. This model is then applied to a set of 14,126 papers published in six well-known psychology journals over the past 20 years, picking up on the textual markers that it now recognizes as signals of replicable findings. In a mysterious twist, these markers remain hidden in the black box of the algorithm. However, the researchers hand-examine a few markers of their own, testing whether things like subfield, author expertise, and media interest are associated with the replicability of findings. And, as if machine learning models weren't juicy enough, Yoel trolls Alexa with an intro topic hand-selected to infuriate her.

Topics

artificial intelligence, NFTs, replicability, experiments, $50000 conversations