Abbreviated Science Round-up: The science of science
newsdepo.com
For the last decade, the science of psychology has been in what is often termed a ‘crisis.’ In the biggest project of its kind, Brian Nosek, a social psychologist and head of the Center for Open Science in Charlottesville, Virginia, and 269 co-authors repeated work reported in 98 original papers from three psychology journals, to see if they independently came up with the same results. … According to the replicators' qualitative assessments, as previously reported by Nature, only 39 of the 100 replication attempts were successful.

Widespread concern over the failure of some well-respected psychological research to stand the test of replication went back to at least 2010, when reports of errors, bad methodology, and plain old fraud roiled the academic community. Nosek’s 2015 review wasn’t the first of its kind, but the scope of the issues he found, affecting a broad swath of the field, only increased the uproar. These weren’t just any academic papers. Some of them were foundational to many other studies. Among the research that failed the reproducibility test were studies behind assumptions about which people were suited for particular jobs, about how soldiers would behave on the battlefield, even about how children should be taught in school.

The results of this and other surveys of the field tossed psychology on its ear. Some, including many non-scientists who had an interest in seeing the field degraded, felt vindicated. Some of those involved in the studies that had not been verified took the paper not just as an attack on their life’s work, but as a personal challenge to their honesty and integrity. But many scientists were happy that this fight was being waged … over there. In psychology. One of those soft sciences where results were all too often dependent on interpreting responses and reactions. It wasn’t something that could affect biology. Or chemistry. Or the squeaky-clean realms of physics.

Except, of course, it could.
As scientists increasingly press forward into areas where the “easy” work was wrung out decades ago, or conduct experiments that push their measuring instruments to the limits of resolution, even the hardest of hard science can fall prey to subjective interpretation and wishful thinking. Reproducing results can generate hair-pulling frustration when the tiniest change in experimental design is enough to swamp them. Ask the people who have dealt with “cold fusion” for decades. Or those attempting to verify announcements of reactionless “EM drives” today. Or ask any of the researchers who have lately been caught, if not falsifying results, at least cherry-picking for success. The truth is that all fields of human science are subject to … humans. Who don’t always quash all the variables, or think through every aspect of experimental design, and who may or may not be perfectly clinical when searching for a lump of gold among statistical dross. So several of this week’s papers are about scientists looking to do better science. And if that sounds boring … nope, not at all. Come on, let’s go read some papers.
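That search for gold among statistical dross can be made concrete with a quick simulation. The sketch below is illustrative only, not from any paper in the round-up; the number of "looks" at the data (20) and the significance threshold (0.05) are assumed for the example. The idea: if a researcher slices pure-noise data twenty different ways and reports success whenever any slice clears the threshold, the effective false-positive rate balloons well past the nominal 5%.

```python
import random

random.seed(1)  # fixed seed so the simulation is repeatable

ALPHA = 0.05          # nominal significance threshold (assumed)
N_LOOKS = 20          # hypothetical: 20 ways to slice one noise-only dataset
N_SIMULATIONS = 10_000

def one_study() -> bool:
    """Simulate one study on pure noise: each independent 'look' at the data
    has a 5% chance of a false positive, and the study is reported as a
    success if ANY look clears the threshold (i.e., cherry-picking)."""
    return any(random.random() < ALPHA for _ in range(N_LOOKS))

false_positive_rate = sum(one_study() for _ in range(N_SIMULATIONS)) / N_SIMULATIONS
print(f"Nominal false-positive rate per look: {ALPHA:.2f}")
print(f"Effective rate after cherry-picking: {false_positive_rate:.2f}")
# Analytically: 1 - (1 - 0.05)**20 is about 0.64
```

Under these assumptions, roughly two out of three noise-only studies would yield a publishable "result" — which is the kind of arithmetic that makes replication failures unsurprising.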