Sounds good in theory: scientists check each other with peer review, and knowledge advances. In reality, scientists are only human.

Schools often present a rosy picture of science as the most reliable generator of knowledge. It uses a special scientific method, something like a secret sauce nobody else has. It employs mathematical proofs. Peer review confers additional reliability. Science marches on. Nevertheless, we have to ask some probing questions about the word “science” before it gets reified as something entirely new and different from any previous or contemporary method of inquiry. For instance, how did the ancient Egyptians build the pyramids, the Mayans create accurate calendars, or the Incas build Machu Picchu, all without peer review, p-values and the modern “scientific method”? To what extent does “science” differ from other fields in the academy, such as history, economics or even music? What subjects belong or don’t belong under the big tent we call “science”? How much of scientific activity involves plain old common sense and logic? What social, economic and cultural influences perturb the idealistic aspirations of science? As the articles below reveal, science cannot pretend to be any more reliable than the people who practice it.

A litany of problems with p-values (Statistical Thinking blog). Frank Harrell is a biostatistician at Vanderbilt University. In this blog entry from Feb. 5, he lists numerous problems with a highly trusted mathematical method for measuring the “significance” of a given factor as a cause of some effect. His work-in-progress lists nine reasons so far to distrust p-values. “In my opinion,” he begins, “null hypothesis testing and p-values have done significant harm to science.” How many tens of thousands of research papers are in jeopardy of irrelevance if Harrell is correct?
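One standard complaint about p-values, not taken from Harrell's post but illustrating the general point, can be sketched in a few lines of Python: a p-value reflects sample size as much as effect size, so a trivially small true difference looks "non-significant" in a small study but "highly significant" once the sample grows large. (All names and numbers here are illustrative assumptions.)

```python
# A minimal sketch of why "significance" is a poor proxy for importance:
# the same negligible true effect (0.02 standard deviations) yields very
# different p-values as the sample size n grows.
import math
import random

random.seed(0)

def two_sample_p(a, b):
    """Two-sided p-value from a large-sample z-test on the difference in means."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    z = (ma - mb) / math.sqrt(va / na + vb / nb)
    # Phi(x) = 0.5 * (1 + erf(x / sqrt(2))); two-sided tail probability:
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))

effect = 0.02  # a practically negligible true difference

for n in (100, 10_000, 1_000_000):
    control = [random.gauss(0.0, 1.0) for _ in range(n)]
    treated = [random.gauss(effect, 1.0) for _ in range(n)]
    print(f"n = {n:>9,}  p = {two_sample_p(control, treated):.4f}")
```

At n = 100 the p-value is essentially noise; by n = 1,000,000 it collapses toward zero even though the effect is unchanged and practically meaningless, which is one reason statisticians argue that null hypothesis testing rewards data volume rather than discovery.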
(See Statistics in the Baloney Detector.)

Certainty in complex scientific research an unachievable goal (University of Toronto). Donald Trump’s election and the Patriots’ Super Bowl win are two recent examples of expert predictions gone awry. A new study published by the Royal Society “suggests that research in some of the more complex scientific disciplines, such as medicine or particle physics, often doesn’t eliminate uncertainties to the extent we might expect.” There’s always a “long tail of uncertainty” and a human tendency to underestimate the effect of small errors, especially as Big Data grows. Is this a problem just for the soft sciences? No; “Physics studies did not fare significantly better than the medical and other research observed.”

Publishing: Journals, do your own formatting (letter to Nature). It’s easy to presume that only the best-tested and most significant research makes its way into the top journals. John P. Moore, in his letter, points out why the best science might actually be excluded from journals for entirely non-scientific reasons. The arcane rules for formatting submissions, which vary from journal to journal, can lead to rejections of papers that otherwise have significant value. A historian could probably compile an eyebrow-raising list of important work rejected by the experts.

Gates Foundation research can’t be published in top journals (Richard van Noorden in Nature). Ideally, anyone who follows the best practices of the “scientific method” should have an equal chance of getting findings published. Here’s a stunning case where entirely different factors preclude that ideal. “One of the world’s most influential global health charities says that the research it funds cannot currently be published in several leading journals, because the journals do not comply with its open-access policy.” Those journals include Nature, Science, The New England Journal of Medicine and PNAS.
Update 2/14/17: Nature News says that an agreement has been reached for the AAAS to publish Gates Foundation research. It appears that the Gates Foundation is pushing journals to adopt open-access policies.

The Promise and Limitations of Using Analogies to Improve Decision-Relevant Understanding of Climate Change (PLoS One). Does this paper’s title set off alarm bells? Rather than examining the geological and atmospheric evidence for climate change in an unbiased way, these two authors published a paper in a science journal on how to nudge people toward the consensus view with storytelling.

Heavyweight funders back central site for life-sciences preprints (Nature). Those who grew up with the comfortable aura of peer-reviewed journals may be shocked at what is going on. Scientists are flocking to “pre-print servers” that allow them to post their work before peer review. Physicists have enjoyed this alt-science phenomenon for more than two decades at arXiv, a Cornell service that allows researchers to post their work in front of the public and all their peers, effectively bypassing the secretive filter of peer review. Biologists, chemists, paleontologists and other scientists are now getting on their own bandwagons with specialized pre-print sites for their fields. While some of the best papers do proceed to journal publication, many do not. Yet this new practice, while promising better transparency and fairness, is fraught with its own problems. How will the reliability of research be assessed in this new ‘wild west’ of open publication? Will it be by the number of ‘likes’ a paper gets, as on Facebook? Can rankings be manipulated by hackers? Who pays for the servers, which can cost hundreds of thousands of dollars a year to run, and what control do the funding sources wield over the content? Will the sheer volume of submissions overwhelm any attempts to gauge reliability? How reliable will any new software tools be for mining the data?
Will search engines be as likely to turn up bogus findings as legitimate ones, and who decides? How will retractions and corrections be managed? What happens to publications that relied on references that were later retracted? If nothing else, this social development in science, facilitated by the rise of the internet and cloud storage, shows that the scientific practices of any given age are not fixed in stone.

Higher education: The making of US academia (Nature). Of interest to historians of science, this book review describes the social and cultural developments behind what the public considers academia today, including science. Rogers Hollingsworth, reviewing The Rise of the Research University: A Sourcebook, shows how much of what we consider normal scientific practice today emerged after World War II. He also shows that American scientific research differs from German practice, yet both are “unpredictable” and “unstable.” He says, “US universities seem to be in existential flux, questioning their size, function, structure, nature, philosophical bases and monumental student fees.” That raises additional questions: what potential great scientists couldn’t afford the fees? How many mediocre rich kids became influential scientists because they could afford the fees? Is the student at one university with a particular philosophical base equivalent to the student at a university with a different one? Who decides whether a science grad from Liberty University is less qualified than a science grad from George Mason? When the foundations are in flux, the products are also in flux.

Science is a misleading word. What was considered standard practice for scientific publication fifty years ago, when students perused the Reader’s Guide to Periodical Literature and scanned lofty tomes in the library stacks, is very different in the Google-search age. So which method was right?
Is the work of prior decades and centuries to be discredited as ‘unscientific’ by contemporary standards? Or are contemporary standards in violation of acceptable norms? What becomes of the Nobel Prize if the rules change? If the rules and equipment of baseball evolve, did Babe Ruth really win at ‘baseball’?

There is nothing sacrosanct about peer review, and nothing distinctively ‘scientific’ about it. Many great works of science were self-published, and scholars in other fields often have their work subjected to the scrutiny of their peers. C. S. Lewis questioned whether ‘modern science’ even exists. “There are only particular sciences,” he said, “all in a stage of rapid change, and sometimes inconsistent with one another.”

The very word ‘science’ conceals as much as it illuminates. The root of the word is ‘knowledge’. Would it make any sense for a student to say, “I’m off to my Knowledge class in the Knowledge Building,” as if this signified anything substantially different from history, math, language or PE class? There is no knowledge without honesty. There is no knowledge without integrity. There is no knowledge without logic. Knowledge itself is useless without wisdom. None of those things are acquired by a scientific method, by peer review, or by journal publishing.

Obviously, all trust in science implodes without integrity. If you think integrity could be measured by some scientific method, think again: you would have to trust the integrity of the one doing the checking, and so on up the line, ad infinitum (see Infinite Regress in the Baloney Detector). Every human investigation, from that of a child to that of a top research scientist, requires honesty, a moral quality that cannot evolve.

Some consider the distinguishing mark of science to be its subject matter: the ‘natural world.’ But here, too, you get into vexed issues of what is meant by ‘natural’, another word with half a dozen meanings.
Big Science has arrogated to itself the investigation of matters far afield from magnetism, cells and chemicals. Journals routinely publish on politics and ethics. Evolutionary scientists in particular are guilty of this; they treat natural selection like The Blob that swallows up everything in its path, including philosophy and religion. Today’s scientists, inflated with self-importance, present themselves as experts on everything. They demand authority, expect politicians to bow to them, and expect taxpayers to offer sacrifices at their temples. It’s time to put them in their place. We’ll listen to them as long as they have something of value to say, but we reserve the right to scrutinize their logic, honesty, and evidence.