It is hard enough trying to work out what the concept of truth can mean in any given context. In fact it is almost impossible even to reach a cautious agreement on what it might, possibly, look like. It is harder still to know when you have found it, or something close to it.
We tend to think that we know truth when we see it, but this is a nonsense that we use to justify our prejudices or to convince ourselves that we understand far more about the world around us than we do. This latter is probably a necessary psychological defence against complete emotional and cognitive collapse (see Total Perspective Vortex). We don't need to know very much, but we do need to think we know.
One thing we think we know is what we are told by competent experts who have carefully conducted a series of experiments, made observations, collated data, tentatively advanced an idea based strictly on the confirmed data and limited by the properly identified limits on the precision of the data itself, and had those ideas, with whatever limitations, caveats and contextual provisos may have been necessary, accepted by others equally expert and equally demanding.
There comes a point in this process where we decide that something that might be worthy of the name of truth, suitably qualified by context, has been arrived at. When this happens, those who wish to build upon it, discover new truths related to it, build an argument upon it, or merely to feel that they know something about it, accept it as learned truth, and don't try to replicate it from first principles before using it for any other purpose.
If every result had to be rederived from scratch, it would be impossible to advance in almost any field of study (mathematics is a possible exception), and there would be no technological development at all. It has to be that way. But it has some consequences.
(There will now be a slight pause while I thank Blogger for swallowing the second half of this post: I SHIT ON BLOGGER AND ON ITS WHORE OF A MOTHER: thank you for your patience.)
Scientists are human: they have mental and physical weaknesses, they have ego and ambition, they have political and moral beliefs, they have intellectual and ethical blind spots, all of which can affect the way they work and how they interpret the data. In spite of this, good science gets done and truths are made known.
As one more quite unnecessary preliminary point, you should know that cranial volume is measured (still, I believe) by pouring in grain or shot or some such thing, then pouring it out again into a measuring vessel. You don't use water because it could alter the surface structure, or be absorbed or slip through cracks. You don't use one of those multi-directional lasers they have on CSI which are so intelligent they not only measure highly irregular spaces instantaneously, they even know exactly what you want to measure before you do. Despite all these advantages, these devices are not used because, I think, they don't exist.
As a case in point, and only because I happen to have read about it today, I offer you the story of Samuel Morton, professor of anatomy at the University of Pennsylvania in the mid-19th century. Professor Morton decided one day to measure the cranial volume of a large number of skulls from different tribal and racial groups, to see whether the differences in character and intellect he had observed were expressed in differences in brain size. He went to the grain store, he measured, he analysed and he concluded that there were observable differences in brain size between populations. This was the accepted position in the field for decades, helped by the fact that it corresponded closely with what other workers expected and wanted to believe.
Then, in 1978, Stephen Jay Gould reanalysed the data and came to the conclusion that Professor Morton had been rather naughty, and had, consciously or otherwise, both taken his measurements and analysed his data in a way that tended to favour the interpretation he expected to find. He explained this at great and persuasive length, and it became the accepted position in the field for decades that Prof Morton's data could not be interpreted in the way he had interpreted it. This acceptance was helped by the fact that it corresponded closely to what other workers expected and wanted to believe.
Now, Jason Lewis et al have published a paper in which they reanalyse the measurements, analysis and interpretations of both Morton and Gould, and suggest Gould has been a much naughtier boy than Morton. John Hawks discusses the arguments intelligibly and coherently, which is the only reason I've understood them, so I suggest you pop over there for further illumination.
So Lewis has shown that something we thought we knew to be right about how something else that was once thought to be right was in fact wrong, is in fact wrong, for the very same reasons that what was once thought to be right and now may in fact be right after all, was later thought to be wrong. This sort of thing happens all the time. The world does not stop turning. We continue to believe what we want, because to believe is more important than to know. It allows us to function.