Saturday, July 17, 2010

What we can measure sometimes misleads us.

Over the course of the past century (plus a bit), a large number of tools have been added to the scientific box for measurement of things chemical and physical. In medicine, we have seen the capability for blood analysis go from merely testing for type compatibility to being able to precisely measure the levels of a number of hormones, antibodies, proteins, lipids, and other nominally important substances. As each new measurement tool has been added, the importance of some branch of the medical art has risen with it, and new guidelines for treatment of various conditions (or their lack) have been proposed and adopted. One would assume that this is a good thing, but in retrospect, the evidence says that's not always the case.

The problem is that while we may know what we're measuring, we may not know what the importance of that measurement really is. The history of science is littered with the debris of discredited theories about how something worked, based on the knowledge of the day, and that knowledge was usually based on the measurement techniques available at the time. If that history teaches us anything in this regard, it's to be cautious about overgeneralizing the importance of what we can measure; it always needs to be evaluated in the light of whether what we think we know matches up with what we can prove.

Take, for example, the levels of certain minerals in drinking water. Every passing decade has allowed us to add both to the number of substances we could identify in ever-smaller quantities, and to the precision with which we could measure them. The effects of high levels of these substances have often been well known, and as the capacity to detect them at low levels has improved, we've often seen alarms being raised about their presence in water around the world. In some instances, it has been possible to demonstrate that these minute levels are new developments traceable to contaminated water from mine tailings or other human activity...but all too often, it's been discovered that these low levels of supposedly deadly elements have been present all along with no apparent ill effects. Our ability to finally measure the "contaminant" merely added to our knowledge of the natural world's composition; the misinterpretation of that measurement, however, has often led to panic.

As a second example, look at that much-heard word from the realm of nutrition, the "calorie." Measurement of the caloric content of foods is a tool that has been around for over a century, but the ability to make sense of what it was telling us has taken a lot longer to develop...and still hasn't fully arrived. As often happens, when the method of measuring dietary calories was devised, the mere ability to perform the measurement caused the importance of the information to be overstated. The possession of a new tool tends to have this effect: that which can be measured is a source of certainty, therefore it must be important. Indeed, it even engendered a fundamental misconception, that "calories are calories" and that the body could not tell the difference between caloric input from any source. This misconception has hamstrung the efforts of those who are trying to reshape our nutritional guides to correctly reflect the actual state of our understanding of the subject, which is far more complex.

For a third example, consider serum cholesterol, once touted as the absolute predictor of heart health hazards. When the ability to measure it was developed, the medical community raced to embrace the test, and drug companies began looking for substances which would act to reduce the measurement to "safe" levels. A lot of money was spent developing cholesterol-lowering drugs before enough analysis of the measurements (and correlation of them to real-world results) revealed that cholesterol levels were not as important as first believed; instead, two other related markers were shown to matter more, and the drug companies began chasing after ways to chemically alter those as well. Unfortunately, that, too, is proving to have been the wrong approach, as expanded understanding of the interlocking actions of several bodily systems has demonstrated that once again, the item being measured was only an indicator, and trying to "fix" the "abnormality" with a drug merely masked the effect of a different, more fundamental problem.

In each case, the ability to measure something has caused a race to find ways to change the results, "treating" an "abnormality" which was thought to be the cause of a specific condition when in fact it was itself only a symptom. The same error has been made for a variety of other measurable chemical levels; we have drugs to lower and raise the levels of things that exist normally at varying concentrations in our tissues, but we often don't actually know why those levels are out of the "normal" range, nor do we necessarily know what the actual underlying cause is. Furthermore, sometimes we don't know precisely how a drug operates to achieve the change in the measured level; more than one pharmaceutical has been discovered to be doing more harm than good in attempting to treat various maladies.

Medicine is not the only field in which incomplete-toolkit-based misconceptions are seen, though it may well be the deadliest. The decades of the 1960s, '70s, and '80s saw many engineering tasks move from the practical, physical design methods of the 1950s and prior to the entirely math-based, computer-aided methods that dominate the field today. Computer simulations of stress on a structure could be run more cheaply and immediately than tests with prototypes and physical gauges, so they were embraced wholeheartedly in the quest to get better products into production faster. Unfortunately, the shift began before our understanding of the nuances and complexities of system stresses had been adequately developed, and as a result, many computer-designed products of the 1970s were, to be blunt, complete rubbish.

New tools for measurement of the physical world come into being every day; we're a curious bunch, and we like to know what happens when we poke things with a metaphorical (and sometimes literal) sharp stick. The problem is that we don't always know whether the information we're collecting means what we think it does.
