The Psychology of Totalitarianism, Chapter 1: Science and Ideology
Author: Mattias Desmet. Publisher: Chelsea Green Publishing, White River Junction, Vermont. Publish Date: May 2022. Review Date: Status:
Annotations
33 The lack of quality in scientific research raises a few pressing questions, including questions about the blind peer review system, which is used in all scientific journals and is considered the ultimate seal of approval for scientific legitimacy. Peer review requires that a study be read and critically evaluated by two or three independent experts in the field before publication. These experts are supposed to be "blind" (they don't know who conducted the study), but in reality, they usually do know the authors because they know the other researchers working in their field. Hence, they can usually guess who conducted the research. For this reason, a fair assessment requires not only that the expert is willing and able to free up sufficient time and energy, which is far from a given in the current academic climate. It also requires that he is capable of identifying his personal prejudices with regard to the research and its authors, and of putting them aside. In other words: Peer review stands or falls on the ethical and moral quality of the expert, that is, his subjective, human characteristics.
19 Michel Foucault defines this as truth-telling.2 Truth-telling is a way of speaking that breaks through an established, if implicit, social consensus. Whoever speaks the truth breaks open the solidified story in which the group seeks refuge, ease, and security. This makes speaking the truth a dangerous endeavor. It strikes fear into the group and provokes anger and aggression.
19 Truth-telling is dangerous. Yet it is also necessary. No matter how fruitful a social consensus may be at a certain time, if it is not dismantled and renewed in time, it will putrefy and eventually have a suffocating impact on society. In such times, the truth will emerge as a sincere voice that breaks through the dull refrain of an established story and lends a new sound to old and ageless words. "Le vrai est toujours neuf" (Truth is always new) (Max Jacob).3
26 To the extent that the scientific discourse became an ideology, it lost its virtue of truth-telling. Nothing illustrates this better than the so-called replication crisis that erupted in academia in 2005. This crisis emerged when a number of serious cases of scientific fraud came to light.
27 This kind of full-fledged fraud was relatively rare, however, and not actually the biggest problem. The biggest problem was with less dramatic instances of questionable research practices, which were reaching epidemic proportions. Daniele Fanelli conducted a systematic survey in 2009 and found that at least 72 percent of researchers were willing to distort their research results in some way.12 On top of that, research was also replete with unintentional calculation mistakes and other errors. An article in Nature rightly called it "a tragedy of errors."13
256 12. Daniele Fanelli, "How Many Scientists Fabricate and Falsify Research? A Systematic Review and Meta-analysis of Survey Data," PLoS ONE 4, no. 5 (2009): e5738, https://doi.org/10.1371/journal.pone.0005738.
257 13. Monya Baker and Dan Penny, "Is There a Reproducibility Crisis?" Nature 533 (May 26, 2016): 452–54.
27 All of this translated into a problem of replicability of scientific findings. To put it simply, this means that the results of scientific experiments were not stable. When several researchers performed the same experiment, they came to different findings. For example, in economics research, replication failed about 50 percent of the time,14 in cancer research about 60 percent of the time,15 and in biomedical research no less than 85 percent of the time.16 The quality of research was so atrocious that the world-renowned statistician John Ioannidis published an article bluntly entitled "Why Most Published Research Findings Are False."17
257 14. Andrew Chang and Phillip Li, "Is Economics Research Replicable? Sixty Published Papers from Thirteen Journals Say 'Usually Not'," Finance and Economics Discussion Series 2015-083 (September 2015), http://dx.doi.org/10.17016/FEDS.2015.083, retrieved from https://www.federalreserve.gov/econresdata/feds/2015/files/2015083pap.pdf.
257 15. C. Glenn Begley and Lee M. Ellis, "Drug Development: Raise Standards for Preclinical Cancer Research," Nature 483 (March 2012): 531–33, https://doi.org/10.1038/483531a.
257 16. C. Glenn Begley and John P. Ioannidis, "Reproducibility in Science: Improving the Standard for Basic and Preclinical Research," Circulation Research 116, no. 1 (January 2015): 116–26, https://doi.org/10.1161/CIRCRESAHA.114.303819.
257 17. John P. Ioannidis, "Why Most Published Research Findings Are False," PLoS Medicine 2 (August 2005): e124, https://doi.org/10.1371/journal.pmed.0020124.
28 The replication crisis does not simply indicate a lack of seriousness and scrupulousness in research. It first and foremost points to a fundamental epistemological crisis, a crisis in the way science is conducted. Our interpretation of objectivity is wrong, excessively based on the idea that numbers are the preferred approach to facts. If we look at the scientific fields with the worst replicability outcomes, it becomes clear that the measurability of phenomena plays a significant role. In chemistry and physics, for example, the situation wasn't that bad. In psychology and medicine, however, it is wretched. In those fields, researchers assess extremely complex and dynamic phenomena: the physical and psychological functioning of human beings. Such "objects" are, in essence, only measurable to a very limited extent, as they cannot be reduced to unidimensional characteristics (see chapter 4). And yet, all too often, we see desperate attempts to mold them into data.
29 In both medicine and psychology, measurement is usually done on the basis of tests that result in numerical scores. These figures give the impression of being objective; however, this needs some perspective. Studies of so-called cross-method agreement start from a question that is as simple as it is interesting: If you measure the same "object" using different measurement methods, to what extent will the results coincide? If the measurement methods are accurate, the results should be virtually identical. However, this is not the case. Not even close. In psychology, for example, the correlation between the results obtained by different measurement methods rarely exceeds 0.45. This, of course, is an abstract number, which is why I like to give a concrete example in my university lectures. Imagine you are building a house and a carpenter comes to take measurements for eight windows. He uses three different tools on each window: a folding rule, a tape measure, and a laser measure. If the carpenter's measurements are as inadequate as a psychologist's, he would report the following results (see table 1.1).
31 With the folding rule, the carpenter concludes that window 1 is 180 cm wide; with the tape measure, the same window is 130 cm wide; and with the laser measure, it is 60 cm wide. It is the same scenario with the second window: The folding rule shows that window 2 is 100 cm wide, the tape measure shows that it is 200 cm wide, and the laser measure shows that it is 150 cm wide. The correlation among the three sets of measurements is 0.45.18
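A minimal sketch in Python of the cross-method agreement idea: it computes the pairwise Pearson correlations among the three tools' measurements. The widths for windows 1 and 2 are taken from the passage above; the other six windows are invented for illustration (table 1.1 is not reproduced here), so the correlations it prints are hypothetical rather than the book's 0.45.

```python
import numpy as np

# Hypothetical widths (in cm) for eight windows, each measured with three tools.
# Windows 1 and 2 use the values from the text; windows 3-8 are invented.
folding_rule = np.array([180, 100, 150, 120,  90, 200, 110, 160])
tape_measure = np.array([130, 200, 140, 180, 100, 150, 160, 120])
laser        = np.array([ 60, 150, 170, 100, 130, 180,  90, 140])

# Rows are "measurement methods"; np.corrcoef returns the 3x3 matrix of
# pairwise Pearson correlations. Accurate tools would put the off-diagonal
# entries near 1.0; discordant tools, as here, pull them well below that
# (possibly even negative).
methods = np.vstack([folding_rule, tape_measure, laser])
print(np.corrcoef(methods).round(2))
```

The same logic applies to psychological tests: a cross-method correlation of about 0.45 means that different instruments for the "same" construct agree scarcely better than the carpenter's mismatched tools.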