The Logic and Illogic of “Occam's Razor”
or “How Occam Helped Wreck Our Environment”
by Russell Johnston, Victoria, BC, Canada
email: (add seventy to the 103) was103 at gmail dot com
first draft 2005, first posted/published March 19, last revised March 22, 2006
Due to the extreme topicality of Shaidurov's new hypothesis regarding global warming, discussed late in the article, I have decided to publish a large excerpt of this article, still a quite rough draft. This essay began as part of a larger work regarding causal factors for chronic illnesses, a project that's still underway. A more complete version should replace it here soon. In particular, the section “When good rules go bad: Occam gone wrong” is very much abbreviated here.
Formulations of Occam's Razor
A reformulation of Occam's Razor
The Razor as a Rule of Thumb, NASCAR, etc
Occam and the Environment
Occam's Razor: a truth not about nature but ourselves?
When good rules go bad: Occam gone wrong
Introduction: a quick history of Occam's Razor
The beginnings of the scientific principle now known as “Occam's Razor” or “The Principle of Simplicity” can be traced back to Aristotle's maxim that “the more perfect a nature is the fewer means it requires for its operation.” Occam, who was imprisoned and excommunicated for his belief in extreme poverty in 1323, famously favored simplicity in theological matters. However, his name may only have been applied to the scientific Principle of Simplicity in Victorian times, by Sir William Rowan Hamilton (1805–1865). [http://en.wikipedia.org/wiki/Occams_Razor] The word “razor” probably refers to the medieval use of razors to scrape excess ink off parchment in order to correct errors while writing. [http://www.royalinstitutephilosophy.org/think/article.php?num=18] So a more modern rendering might be “Occam's Theory Eraser”.
Occam's Principle of Simplicity itself, as applied to medieval theology and philosophy, was not invented by Occam. It was common before his time, but Occam became renowned for employing it in disputes concerning Christian thought and law. In particular, he used it to decide whether Angels had what to us would be ESP [Book II of Occam's Commentary on the Sentences of Peter Lombard]. [http://skepdic.com/occam.html] On this lately neglected topic, Occam argued that higher beings should be simpler than ourselves, and therefore have fewer but more impressive senses than mere human beings do – such as just one sense. This is, of course, not something observed in nature – higher animals tend to have more sensory organs than very simple creatures, not fewer. In his argument he explicitly followed the earlier Aristotelian view that “the more perfect a nature is the fewer means it requires for its operation” rather than a more general idea that all of nature uniformly prefers simplicity.
We may, therefore, be excused if we pass on quickly from any focus on William of Occam's particular discussions to the application of a similar principle to scientific debates. This has a long history, and it would be useful to attempt to state the principle clearly before analyzing it.
Formulations of Occam's Razor:
Occam's Razor/The Principle of Simplicity, or something like it, has been stated in more than one way.
Citing those with the most definite historical provenance first:
1a) “We may assume the superiority ceteris paribus of the demonstration which derives from fewer postulates or hypotheses.” - Aristotle, Posterior Analytics, transl. McKeon, [1963, p. 150]. [http://plato.stanford.edu/entries/simplicity/]
(As we will discuss, this may be only a principle of logic or etiquette for proofs, and not the more modern principle.)
1b)"the more perfect a nature is the fewer means it requires for its operation" – Aristotle
2) “Pluralitas non est ponenda sine necessitate” or “plurality should not be posited without necessity.” (a quote from Book II of Occam's Commentary on the Sentences of Peter Lombard) [http://skepdic.com/occam.html]
3) “If a thing can be done adequately by means of one, it is superfluous to do it by means of several; for we observe that nature does not employ two instruments where one suffices.” - Aquinas, T. (1945) Basic Writings of St. Thomas Aquinas, trans. A.C. Pegis, New York: Random House, p. 129.
4) “One should not increase, beyond what is necessary, the number of entities required to explain anything.” which dates from 1639, from John Ponce of Cork. [http://en.wikipedia.org/wiki/Occams_Razor]
5) “Nature does not multiply things unnecessarily; that she makes use of the easiest and simplest means for producing her effects; that she does nothing in vain, and the like” - Galileo, while comparing the Ptolemaic and Copernican theories of solar system dynamics. Galileo, G. Dialogue Concerning the Two Chief World Systems, translated by Drake (1962), Berkeley, p. 397.
6a) “Rule I: We are to admit no more causes of natural things than such as are both true and sufficient to explain their appearances.” - Newton's first of three ‘Rules of Reasoning in Philosophy’ (philosophy then included the sciences), Book III of Principia Mathematica.
As well as:
6b) “Nature is pleased with simplicity, and affects not the pomp of superfluous causes” - Newton, I. (1964) The Mathematical Principles of Natural Philosophy, New York: Citadel Press, p. 398). 
(6a is admirably specific, but seems more a truth of logic than a predictive principle when compared to 6b.)
7) “[T]he grand aim of all science…is to cover the greatest possible number of empirical facts by logical deductions from the smallest possible number of hypotheses or axioms.” - Albert Einstein [Nash, L. (1963) The Nature of the Natural Sciences, Boston: Little, Brown.].
Finally, perhaps the two most common or popular formulations of the Principle:
8) “Entia non sunt multiplicanda praeter necessitatem”, or "Entities should not be multiplied beyond necessity" (probably the most common formulation.)
9) “Of two equivalent theories or explanations, all other things being equal, the simpler one is to be preferred.” - this is a common formulation, but also quite possibly the most ambiguous, which is to say the least specific or informative.
Two distinct principles
Our first task, as many have noted, must be to separate out two very different principles that may be at issue in the formulations above. One, of logic or grammar, suggests that if you can express a truth in two logically equivalent ways, the shorter statement is to be preferred. It might be extended to say that of two logical or mathematical demonstrations of the same truth, that which is shorter or relies on the fewest premises is to be preferred. This rendering is most similar to Aristotle's first statement 1a) above and is uncontroversial, but not exciting or useful to scientists. It cannot in any way explain very lively debates about whether funds should be diverted from genetic studies of Multiple Sclerosis into demographic surveys that might show whether the recent disappearance of malaria from Europe might fully explain the appearance and rise of multiple sclerosis. A merely logical rule may be of more use to editors of scientific journals confronted with prolix authors, but is of little use to funding agencies.
If the reader is wedded to this interpretation of the Principle of Simplicity, then obviously they have no reason to discuss whether a similar-sounding principle with predictive value exists – they have rejected any more exciting possibility and preempted the discussion that follows here. Those who retain some uncertainty, or believe that a principle of more than mere economy of expression is at stake, or who are actually convinced that Occam's Razor (or something very close to it) expresses a principle offering at least some predictive value (given epistemological uncertainty or incompleteness), have reason to continue reading. From this point on we will deal only with questions concerning such more substantive formulations of the Razor, or Principle of Simplicity – if only because, while there was a time when a logical or minimal Principle of Simplicity such as Newton's first statement 6a) was useful in that it removed elaborate theological ornamentations that held religious importance but no explanatory power, that time has passed in most States.
Bertrand Russell called Occam's Razor "the maxim that inspires all scientific philosophizing." So he certainly appears to have believed that it expresses more than a point of logical etiquette. [Bertrand Russell, Our Knowledge of the External World, Open Court, Chicago, 1914] Newton's second statement 6b) and Einstein's statement 7) above also seem to stake out a more substantive, and perhaps even predictive principle, something we can now consider in more detail. In order to do so however, we need to be a bit more precise about the terms of such a principle.
Initial ambiguities: complexity and new entities are also desirable
To begin with the most popular formulations:
"Entities should not be multiplied beyond necessity"
“Of two equivalent theories or explanations, all other things being equal the simpler one is to be preferred.”
The terms “simple” and “entity” harbor considerable ambiguity. Worse, they may be misleading. One would suppose that any useful statement of Occam's Razor would give preference to the most elegant or productive theories – Einstein's statement certainly does this - but such theories are not necessarily “simpler”.
A theory in which A causes B is obviously simpler in a logical sense than a theory in which A causes B, C, D, E and F. Yet any reasonable or useful understanding of Occam's Razor surely must tell us to prefer the latter theory because it is “more elegant”, accomplishing more with less. The latter theory is more efficient, yes: but clearly not simpler in any ordinary, or logically formal, sense of the word “simple” – and no special, technical sense for “simpler” in this context has been offered that remedies this (for example, definitions involving computational complexity and minimum message length retain or reinforce this difficulty rather than curing it.) But we do want this sort of complexity. Darwin said in defense of his own theory, in an answer to Sedgwick, that this sort of complexity argued in its favor: “I cannot think a false theory would explain so many types of facts, as the theory seems to me to do.” [British Museum Mss. Eg. 3020A, quoted by Himmelfarb, p. 270] [FOOTNOTE Sedgwick replied, anonymously and sharply, that “you cannot make a rope out of air bubbles” (i.e. no matter how many) – clearly he was impressed with neither the complexity of the theory nor its premise; it all violated his moral and religious sense.]
In this regard it can be said, consistent with Einstein's formulation, that within the context of future discovery, we may reasonably hope that the more complex individual explanation above, the one that explains more, will go on to become part of a whole science that is simpler, in that it requires fewer ultimate causes and laws. However, this doesn't exhaust the scientific desire for complexity, or its benefits. We can go still further and note that it is not uncommon for scientific theories to imply previously unimagined and so far unobserved consequences (including physical particles.) It is certainly not considered disadvantageous for a theory to introduce new entities in this way in the hard sciences. In his 1974 speech against “cargo cult science”, the renowned physicist Richard Feynman spoke up for this sort of complexity explicitly, stating that real scientific theories don't just fit the data, but always “make something else come out right, in addition” – that is, that they are more complex than they strictly need to be. [http://www.physics.brocku.ca/etc/cargo_cult_science.html] More recently, String Theory has been heavily criticized within Physics, including by prominent physicists such as Freeman Dyson, precisely for not being complex enough in just this sense – the theory does not yet seem to posit any new entities or phenomena that can be sought out to confirm or reject such theories. [“The World on a String” by Freeman Dyson - http://www.nybooks.com/articles/17094] As a clear historical example of a successful theory that posited such (downstream) new entities without proof, consider Mendeleyev's Periodic Theory of the Elements (the Periodic Table). Many of the elements which his theory said must exist in order to fill the gaps in his table had not yet been discovered. They were new, previously unimagined entities which the previous chemical theory did not require.
[FOOTNOTE Interestingly, the Periodic Theory had been discovered three years earlier by an amateur chemist in England, John Newlands. His newly complex theory met a great deal of mockery, particularly for the similarity it offered to musical scales – this additional complexity to the comprehensive understanding of chemistry seemed to his audience one ridiculous novelty too many, and the theory slid into obscurity. [Bill Bryson, A Short History of Nearly Everything, Anchor, 2003, p. 107] ]
In any case, whether previously unimagined consequences flow from a theory or not, we can surely agree that multiple and/or complex evidence for a single or simple cause – and therefore demonstrable multiple and/or complex consequences flowing from a single or simple hypothesized cause – are at the very least consistent with Occam's Razor. Otherwise, we fall into the ugly position of preferring, for simplicity's sake, that theories have the least confirming evidence possible, giving preference to the least elegant theories which yield the smallest possible amount of explanation – all following from the fact that, taken one at a time, such theories are, in fact, simpler. This, logicians would have to concede, is true, and remains so when complexity is measured by minimum message length or computability requirements. (Again, such reasoning can hold if and only if scientific explanations are considered individually, and not in the context of future theoretical discoveries and that scientific field as a whole.)
In contrast, Newton's second statement 6b) and Einstein's statement 7) of the principle we are considering show that multiple or complex consequences from single or simple causes are more than merely consistent with Occam's Razor: they are the most complete expression of the principle that “The grand aim of all science…is to cover the greatest possible number of empirical facts by logical deductions from the smallest possible number of hypotheses or axioms.”
A reformulation of Occam's Razor
It's possible to explicitly resolve (and avoid) these apparent difficulties stemming from uninterpreted uses of “simple” and “entity” with a novel formulation of Occam's Razor that is quite consistent with Newton and Einstein's but which retains what is attractive about the two popular but terse formulations 8) and 9) as well. Most quickly stated:
10a) “Prefer explanations with the greatest upstream simplicity and the greatest downstream (from the cause) complexity.”
That is, we should hope to find original causes or rules from which flow manifold, complex consequences, even along diverse causal pathways. More mathematically stated:
10b) “The greater the ratio of the number and complexity of the consequences to the complexity of the causal entities and processes proposed, the more consistent a hypothesis is with Occam's Razor.”
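Schematically (the label “Razor-consistency” is mine, introduced only for illustration, not a term from the literature):

```latex
\text{Razor-consistency}(H) \;\propto\;
  \frac{\text{number and complexity of the consequences } H \text{ explains}}
       {\text{complexity of the causes and processes } H \text{ posits}}
```

A hypothesis scores well either by explaining more downstream or by positing less upstream; the two popular formulations 8) and 9) speak only to the denominator.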
We can now note explicitly, then, that in one direction Occam's Razor actually favors complexity. This may yet seem counterintuitive to some readers, so perhaps it will be useful to cite one example in corroboration. The fact that a single viral invader (entity) causes all of fever, anorexia, malaise, chilliness, headache, and muscle aches and pains (as is the case with the common cold and influenza) certainly satisfies Occam's Razor rather than contradicting it, despite this explanation being more complex in the sense that it explains more. Still more so with recent discoveries that attribute all these various consequences to the effects of cytokines that our bodies produce to battle the viral invader, which yields a still more complex, still more detailed explanation involving even more consequences and entities (all still downstream of the viral cause, but upstream of the consequent symptoms.) This elaboration also satisfies Occam's Razor more fully, rather than contradicting it. It may be simpler to say “a virus somehow causes various symptoms” than “in response to a virus, the following cytokine pathways are activated, eventually resulting in a range of symptoms”, but this is not a kind of simplicity we should prefer – else we need not have progressed beyond mythology.
Occasionally, thinkers and scientists have in fact been misled by the seemingly endless, unexplained repetition of the word “simple” in discussions of the Razor, and do in fact reject complex, multiple consequences from simple causes, actually reversing the meaning of Occam's Principle while citing it. This can be especially tempting in medicine, since causal pathways can be not merely complex, but may not be resolvable to a single causal tree (however complex), and may not even be unidirectional (where feedback and homeostasis are involved.) In such cases, merely discerning upstream from downstream is often difficult, and it can be all too easy to reject a theory or mechanism because, at first glance, it seems more complex than necessary (or just more apparently complex than the mechanism we had hoped to find) when the complexity actually indicates that more is being explained.
Indeed, some of the examples listed in the third section here, “When good rules go bad: Occam gone wrong”, may be seen to fall into this kind of error of interpretation (rather than the rule itself pointing in the wrong direction) – a preference for simplicity that may have led scientists to neglect the importance of explanatory power, including the derivation of hitherto unimagined consequences which would later be verified.
Clearing up our streams
“Upstream” and “downstream” are common terms found for example in medical research, referring to relative places within a detailed biological pathway, or (often quite complex) chain of causation. However, since these terms are frequently used ambiguously in that context by researchers, I hope the reader will forgive the following clarification of what “upstream” or “downstream” can mean.
In general, use of these terms will indicate (although sometimes as a conscious oversimplification on the part of the authors for pedagogical purposes) a linear pathway – that is, one without complex feedback mechanisms which can make all events both upstream and downstream of each other (according to circumstance.) Often, only a very limited domain will be considered, certainly not a “closed system”, in order to allow consideration of a short pathway that is linear, without feedback loops.
Next, even a linear or unidirectional “pathway” may be a tree of cascading effects heading off in more than one direction, rather than a single line. Most critically, “A is upstream of B” may mean 1) that A is a direct cause or necessary condition of B, that is, that both are part of a single line of the pathway being considered. Or, 2) it may mean that both A and B share a common cause further up a pathway that is a tree, but that A happens before B does – under normal conditions, at least – and that the chain to B is longer and more complex – a somewhat confusing usage. Only in error should it be used to mean merely 3) that A “occurs in time earlier than” B – since two events both downstream of a cause may be situated along entirely different branches of a causal tree, and such a usage of “upstream” in that case would not imply any causal or explanatory link between these two events, or clarify the nature of any such link in any way. Or 4) “A is upstream of B” may mean that one or the other of 1) or 2) (hopefully not 3!) applies, but the researcher does not yet know which. Only very rarely will a medical researcher state which of these senses of “upstream” or “downstream” applies. For the purposes of this article, “downstream complexity” refers to multiple consequences of simpler entities or causes, whether those entities are regularities expressed as mathematical formulae, molecules, events, processes, etc.
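These distinctions can be made concrete with a toy causal tree, borrowing the virus-and-cytokines example from the previous section (the data structure and function are mine, purely illustrative): sense 1), being an ancestor on the same causal line, is a checkable relation in the tree, while mere temporal order, sense 3), implies nothing about it.

```python
# A toy causal tree: each event maps to its direct downstream effects.
causes = {
    "virus": ["cytokines"],
    "cytokines": ["fever", "muscle aches"],
}

def upstream(a, b):
    """True iff a is an ancestor of b in the causal tree (sense 1 above)."""
    return any(c == b or upstream(c, b) for c in causes.get(a, []))

print(upstream("virus", "fever"))         # True: same causal line
print(upstream("fever", "muscle aches"))  # False: siblings on different branches,
                                          # whatever their order in time (sense 3)
```

Note that "fever" occurring before "muscle aches" on some occasion would not make it upstream of them; both hang from the same cytokine branch point.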
Still more about simplicity
However, more remains of our skeptical analysis regarding possible ambiguities of the word “simple”, retained in our new formulation. “Simple” here doesn't necessarily mean “that which seems simple to the human brain”. Just as evolution has shaped what tastes and smells pleasant or threatening to us, all aspects of our brains have been thoroughly shaped by evolution as well. As our successes and follies in approaching Artificial Intelligence show, many extraordinarily complex tasks of pattern recognition seem transparently simple to human beings because our brains are well adapted to them, such as the recognition of faces; while other tasks that are relatively simple for machines to perform, such as chess, seem inordinately complex to human brains because these are not very natural tasks for us – we were never required to checkmate an antelope before eating it. “Simple” and “complex” turn out to be among the most problematic words in human language, since the common meanings of these terms reflect all the quirks of the entire detailed evolutionary pathway of our most complex organ, the brain. Perhaps the only more difficult terms to define would be “love” or “moral”, for similar evolutionary reasons. Such are the lessons of the last century of computational theory and research.
Speaking of which, as mentioned above, one could resort to a “simplest Turing machine” definition of complexity, as is sometimes done (as “Minimum Description Length” or “Minimum Message Length”) using a computational approach. [FOOTNOTE See for example Needham, S.L. and D.L. Dowe (2001). Message Length as an Effective Ockham's Razor in Decision Tree Induction. Proc. 8th International Workshop on Artificial Intelligence and Statistics (AI+STATS 2001), pp. 253-260, Key West, Florida, U.S.A., Jan. 2001. http://www.csse.monash.edu.au/~dld/Publications/2001/Needham+Dowe2001.ref] However, for the majority of scientific problems this is not yet a practical methodology – and of course, not a development or interpretation anticipated by Occam or previous generations of working scientists. It should be noted that favoring “simple to state” explanations immediately favors whatever explanation is most consistent with the unique presuppositions of the group, time, and place when and where the discussion takes place – biases and assumptions that have not usually proven themselves terribly useful guides to science in the past. Any simple term in ordinary language or science may hide considerable complexity – including difficulties as yet undiscovered. Human beings generally use definitions that are underdetermined; we leave many possible complexities of interpretation as bridges that we will cross if and when we come to them, and often define concepts such as “game” associationally, as occasions arise (as Wittgenstein argued.) So attempting to use a computational approach in any practical research context is not yet very valuable.
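As a toy illustration only – not the formal Wallace–Dowe MML framework, with an arbitrary 32-bit cost per parameter standing in for the hypothesis part of the message, and ordinary compression standing in for the data part – the two-part "message length" idea can be mocked up as follows. It also shows why the longer-to-state linear hypothesis can still win on total description length:

```python
import zlib

def description_length(params, residuals):
    # Toy two-part message length (NOT the formal Wallace-Dowe MML):
    # a flat 32-bit cost per parameter stands in for L(H), and the
    # zlib-compressed size of the residuals stands in for L(D|H).
    param_bits = 32 * len(params)
    data = ",".join(f"{r:.3f}" for r in residuals).encode()
    return param_bits + 8 * len(zlib.compress(data))

xs = list(range(50))
ys = [2 * x + 1 for x in xs]  # data actually generated by a line

# Hypothesis A: a single constant (the mean) -- one parameter, big residuals.
mean = sum(ys) / len(ys)
cost_mean = description_length([mean], [y - mean for y in ys])

# Hypothesis B: the line y = 2x + 1 -- two parameters, zero residuals.
cost_line = description_length([2, 1], [y - (2 * x + 1) for x, y in zip(xs, ys)])

print(cost_line < cost_mean)  # True: tiny residuals more than repay the extra parameter
```

The "more complex" hypothesis compresses the evidence better overall – which is the MML rendering of upstream simplicity buying downstream explanation – but notice how many arbitrary encoding choices even this toy required; that arbitrariness is part of why the approach is not yet practical for most scientific problems.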
Similarly, “simple” should not be taken to mean “most plausible” as it too often is, however inadvertently: for this interprets “simple” as meaning perhaps “the explanation requiring the least justification given current cultural presuppositions”. That is, the simplest to defend or that which requires the shortest defence. Such an interpretation is more common than one would hope, and easily appears in the heat of intellectual battles. However, this interpretation verges on reducing the principle to “the most plausible-seeming explanation to me right now is very likely to remain the most plausible explanation for all time”, or even (to put a very fine point on it) “truth generally follows the crowd.” Obviously this would be a different principle than the Razor, and this interpretation too, is one that has proven conspicuously unhelpful during the history of science.
To ground the discussion of simplicity while we untangle some of its ramifications, consider the following example illustrating the complexities involved in discerning what is simple. No doubt it is in fact more plausible that dozens of informal groups of artists around the earth create crop circles than to suppose that a single alien spaceship creates the designs; but can we explain why we prefer the former explanation to the latter by reference to Occam's Razor? (As one academic site, the Galilean Library, does in trying to unmistakably illustrate the meaning of Occam's principle. [http://www.galilean-library.org/or.htm]) The former is more plausible, of course, but we have just rejected plausibility as an accurate rendering of “simplicity of cause.” In support of the Galilean Library, however, it is simpler to point to known entities than to as yet unknown or unproven ones. Although one has to invent particular artists' groups with an interest in crop drawing, one doesn't have to invent or newly introduce the existence of artists, their ability to form groups, or their sense of humor, whereas one does have to posit a hitherto unknown entity, namely an alien spaceship, in the alternate explanation. In sum, this illustration of the Razor is probably correct, but perhaps more difficult to evaluate than it might seem at first glance, unless we conflate plausibility with the very different concept of simplicity. The corollary of Occam's Razor we've arrived at here that saves the illustration could be stated as: “Don't posit a novel entity of a kind not already known to exist, if one (or even more) (upstream) entities of a kind already known to exist (whether or not the particular posited entities are known to exist) could provide a sufficient explanation instead.” This only points to a part of what Occam's Razor as a whole advises, but is wholly consistent with it.
This corollary is useful, but as we shall see in historical examples, it can also tempt us to stay so far inside our “comfort zone” that we miss better explanations or even cut short our examination of the evidence. (Although I personally am not too worried about a lack of evidence in the case of crop circles, which have had plenty of excited attention.)
The Razor as a Rule of Thumb, NASCAR, etc
Occam's Razor is not a uniform law of nature – as we will see, it can fail (within the context of the known evidence.) However, as will be discussed later, it can be and has been argued that it follows from such laws of nature. Even so, that it can fail makes it reasonable to consider it as something more like a rule of thumb. (Note, however, that the probabilistic laws of quantum mechanics are not considered “rules of thumb”, if only because such large numbers of instances are involved, so the mere presence of probabilities doesn't necessarily indicate a mere rule of thumb.)
For explanatory clarity, we might compare it to another very practical predictive rule of thumb – one I'll dub here the (NASCAR, etc.) racing driver's “Principle for Avoiding Secondary Collisions”. This well-established principle for drivers advises that if two or more race cars have collided in front of you during a race, you should always aim for the collision in order to avoid it – since Newtonian physics will likely push both/all cars away from the initial point of collision, and therefore the collision “won't be there when you get there.” Clearly, this Principle of Collisions is only probable and not an inviolable principle of nature – if only because colliding drivers' instinctive reactions as they attempt to stay in line and remain in the race will also play a part, as will any collisions that follow immediately, spin, track incline, etc. Sometimes, following the principle will result in another accident. Even so, the principle is usually the best bet available to the drivers behind the accident when faced with the choice of where they should aim their cars, with a suddenly very limited amount of information about the forces, steering wheel positions and drivers' decisions up ahead of them. Because this “Principle for Avoiding Secondary Collisions” describes, within some probability, events in the world that have not yet occurred, it is (to a limited extent) predictive, but it is not psychic, nor magic, nor anything like a uniform law of nature or logic. It is a rule of thumb, although both logic and natural law can be said to play a considerable part in making it a useful principle for avoiding secondary collisions. A close consideration of cases from the history of science may convince you that this NASCAR principle/rule of thumb is actually stronger (more highly predictive) than Occam's.
When the Razor slips... the misapplication of Occam's Razor
And as examples will show, Occam's Razor has very often “been wrong” – that is, favored an explanation that would later turn out to be incorrect. It's remarkably easy to compile a very long list of instances from the history of science in which it seems to have proved false. Of course, in each of these historical cases later evidence has emerged that was inconsistent with the favored simpler explanation (else we wouldn't know that the Razor had proven false.) If, relative to incomplete evidence, Occam's Razor can be mistaken, it follows as a corollary that the Razor is no substitute, ever, for experimental proof or evidence. Unfortunately, since scientists are human beings, group thinking in science has often masked itself as the use of Occam's Razor and prevented further investigation and the accumulation of additional evidence, just as if the Razor provided a kind of proof. This often seems to have happened unconsciously, without ever having to be stated as an argument.
Some possible examples of this sort of rigidity from medicine are ready to hand, perhaps understandably given the overwhelming complexity of the human body. Such as: diabetes diets ignoring fiber and differences in carbohydrate absorption that went untested for decades, harming diabetics; the certainty that sunblock would prevent skin cancer; the certainty that antidepressants couldn't cause adolescent suicide (whether so or not); the fixed notion (re hypoxia) that blood saturation of oxygen was always constant exiting the lung regardless of breath or effort. No doubt in each of these cases more than a simple resort to Occam's Razor was involved in the error – but I would argue that in each case decisions not to energetically investigate further were buttressed by the thought that, since science already had the simplest explanation that fit the available scientific evidence, no great urgency to examine novel entities existed. Yet more complexity did wait to be discovered: differential reaction to carbohydrate consumption according to fiber density; something more than an obvious linear relationship between ultraviolet exposure and cancer; different kinds of reactions to antidepressants according to age (perhaps); and differences in oxygen-carrying capacity under stress.
Noting more clearly that it is only upstream simplicity that is to be preferred under Occam's Razor may encourage us to look energetically for novel downstream complexities and entities, even when we have a theory that has so far proved adequate. This might counter a common historical tendency within science, and all other human endeavors, to rest when an apparently adequate explanation exists – often using the conscious or unconscious tendency to rely on too-simplistic interpretations of Occam's Razor as a justification for this inactivity.
In sum, Occam's Razor can never decide any scientific issue, and is not intended to. It is at most a heuristic (or to be more vague, “epistemic”) device that can sometimes speed our investigations (and, less often, mislead us and prolong them.) A linear inverse law of gravity may be simpler to calculate than an inverse law according to the square of the distance, and less intuitive as well, but many observations and experiments (and not a few hours of calculation by Newton) have shown the more “complex” law of gravity to be correct, so it was necessary to accumulate observations and test hypotheses against the data, after all. A whole list of much less hypothetical examples of Occam's Razor proving to be false (always relative to the known evidence at some point in history) will be listed in the last section, “When good rules go bad: Occam gone wrong.”
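The gravity example can be made concrete with a quick check against standard (approximate) planetary figures. An inverse-square force implies Kepler's third law, T² ∝ a³, while a "simpler" linear inverse (1/r) force would instead make T proportional to a for circular orbits. Only one of the two ratios below is constant across the planets – and it is the data, not prior simplicity, that tells us which:

```python
# Semi-major axis a (AU) and orbital period T (years); standard approximate values.
planets = {
    "Mercury": (0.387, 0.241),
    "Venus":   (0.723, 0.615),
    "Earth":   (1.000, 1.000),
    "Mars":    (1.524, 1.881),
    "Jupiter": (5.203, 11.862),
    "Saturn":  (9.537, 29.457),
}

# Inverse-square gravity predicts T**2 / a**3 = constant (Kepler's third law).
# A hypothetical 1/r force would give T / a = constant for circular orbits.
kepler = {name: T**2 / a**3 for name, (a, T) in planets.items()}
linear = {name: T / a for name, (a, T) in planets.items()}

for name in planets:
    print(f"{name:8s}  T^2/a^3 = {kepler[name]:.3f}   T/a = {linear[name]:.3f}")
# T^2/a^3 stays within a fraction of a percent of 1.0 for every planet,
# while T/a ranges from roughly 0.6 (Mercury) to roughly 3.1 (Saturn).
```

The point is not that the arithmetic is deep; it is that no appeal to simplicity could have settled the question without Tycho's and later observers' data to plug into it.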
Occam's Razor is always contextual
To be a bit tendentious, it follows from the above points that in any practical setting, Occam's Razor is contextual. For example, if we already know that gravity varies by an inverse square of distance, say from torsion experiments, then nothing else is needed (save logic and mathematics, which are already accepted by all thanks to Newton's efforts) in order to explain the existence of elliptical orbits. These are mathematically inevitable and require no additional explanation of any kind, however “simple”. This is another form of our previously stated corollary, that reference to previously known and fully-understood entities (including but not limited to known regularities or scientific laws) is to be favored. An “Intelligent Design” theory of orbits, such as Kepler actually held and at first desperately tried to prove, then becomes entirely redundant. Therefore, however simple it may seem to invoke a singular deity, as an intelligent design theory of orbits does, it turns out that this only introduces another entity into orbital science, and unnecessarily. Needless to say, it should be obvious from either logic or the much earlier discussion here that introducing such further entities into explanations, while logically consistent with the facts, introduces further complexity. However, Kepler's context was different from ours, because the evidence available to him was different. He could not have used Occam's Razor to spare himself the effort of looking for simpler patterns in the data describing planetary movements than those his three surviving laws describe. If anything, the Razor suggested he should check for such simpler patterns first.
So he had to examine Tycho's observations and then make the necessary calculations in order to know that the movements of the planets around the sun are not regular in very simple ways (such as the perfect circles at equal distances from one another that he at first hoped to find as proof of a greater intelligence that wished to make itself known thereby). For Kepler, in his historical context, what we might call his “Theory of the Intelligent Design of Perfectly Circular Orbits as a Signal From a Higher Being” and his later “Theory of the Intelligent Design of Orbital Spacing Corresponding to Nested Simple Geometrical Objects as a Signal From a Higher Being” were arguably hypotheses to be tested and eliminated. No other mechanism by which planets could be maintained in orbit was known, so it was as good a starting point as any. Yet since Halley and Newton, the same Occam's Razor, wielded in a more modern context, tells us that hypothesizing a deity to guide the planets, and the intentions of such a deity for doing so, is mathematically and otherwise superfluous. In the event, in due time Kepler arrived at five laws describing planetary orbits which were far more complex than any he had hoped to find. Alas, only three proved to be true, but that's still three more laws of orbital motion than you or I have discovered.
So what Occam's Razor tells us is modified by context. It is always applied where complete evidence is unavailable, as a means of distributing experimental resources of time, money and attention; providing a fallible judgment, if only as a working hypothesis, when all the evidence isn't in. Needless to say, an explanation that fits the evidence available at one moment in time may not fit so well a month later. Amongst multiple hypotheses that fit the evidence, Occam's Principle makes a prediction about which is most likely to become the survivor, over time. It tells us not the truth, nor necessarily a probability greater than 50% (amongst many possible hypotheses), but where best to place our bets for the moment. Therefore, it shouldn't be very surprising, and certainly isn't logically confounding, that properly applying Occam's Razor at one time favors or selects for us an explanation that later turns out to be false (perhaps too simple), since after a time it no longer fits all the available evidence. It should not even be surprising to state that, over time, Occam's Razor may switch back and forth among possible explanations as evidence accumulates, first favoring a given explanation, then not, and later pointing to its initial choice once again. (This is similar to other cases of probability fluctuation and reversal in cases where knowledge gradually increases, say while counting Blackjack cards in Vegas.) In the end, of course, once all the possible data, and all computations upon it, are in, the best explanation is also the only one left standing, always. (Any “competitors” at this endpoint are just logical equivalents.) In this absurd sense, Occam's Razor is always “right”: if by that one means that once all the evidence is in and only one explanation is left, why then, all the evidence is in, and only one explanation is left. Considered in any other light, Occam's Razor is a fallible predictive rule.
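The card-counting analogy can be made concrete with a toy calculation (my illustration; the probabilities and toss sequence are invented for the example). Two hypotheses compete to explain a coin: a “simple” one (the coin is fair, p = 0.5) and a “complex” one (the coin is biased toward heads, p = 0.7). Updating the posterior odds after each toss, the favored hypothesis can switch back and forth as evidence accumulates, exactly as described above, before the data finally settle the matter.

```python
# Track which of two hypotheses the accumulating evidence favors, toss by toss.
# H_simple: a fair coin (p = 0.5).  H_complex: a biased coin (p = 0.7).
# With equal priors, the posterior odds are just the product of likelihood ratios.

def leading_hypothesis_history(tosses, p_simple=0.5, p_complex=0.7):
    """Return, after each toss (1 = heads, 0 = tails), which hypothesis leads."""
    odds = 1.0  # posterior odds P(H_simple) / P(H_complex), starting from equal priors
    history = []
    for heads in tosses:
        if heads:
            odds *= p_simple / p_complex              # a head favors the biased coin
        else:
            odds *= (1 - p_simple) / (1 - p_complex)  # a tail favors the fair coin
        history.append("simple" if odds > 1 else "complex")
    return history

# An illustrative sequence: two heads, two tails, then three heads.
# The lead switches from "complex" to "simple" and back again.
print(leading_hypothesis_history([1, 1, 0, 0, 1, 1, 1]))
```

Nothing here is specific to coins: the same bookkeeping applies whenever rival explanations are re-weighed against a growing body of evidence, which is why a properly applied Razor can point one way today and the other way next month without any logical inconsistency.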
Occam's Razor, social approval, and consensus
Consequently, we should not use “the principle of simplicity” to justify simplifying research by restricting our betting (in funds, time, graduate students, etc.) to a single hypothesis, or even to persuade us to devote most of our resources to one hypothesis. However, it will help us weight our bets so that we at least favor the simplest hypotheses. This is an extremely important caveat to state, because all human history and anthropology, both within and without the scientific community, shows that human beings tend to prefer to herd along fixed channels of thought, rather than to vigorously explore many diverse conceptions or possibilities at once. Many tragic examples suggest themselves immediately from medical research – such as the fixed notion that polio was not an enteric pathogen, and the sad history of research into multiple sclerosis, in which all research became fixed into a single channel, usually following a specific animal model, only to later switch entirely into another equally narrow channel – amongst many others. Generally, in medical research the overwhelming majority of resources are single-mindedly devoted to only one general hypothesis, model, or accepted theory about the nature of an illness, and often to one quite narrow avenue of treatment as well – thus wasting time that is critical to patients and duplicating (or all-but-duplicating) much research. This same scientifically costly human tendency to avoid – to very strongly avoid – being a “lone fool” is evident in the history of all other fields of knowledge (and investment bubbles) as well, of course. In fact, it was precisely to counteract this tendency that the Royal Society adopted as its motto: “On the word of no-one”.
The Royal Society explicitly encouraged natural philosophers in the seventeenth century to challenge current assumptions and “knowledge” - whether held either by Aristotelians or each other – in addition to the previous, classical quest to extend currently accepted knowledge and belief at its margins.
So the many modern occasions when a misunderstanding of the nature, or an overestimation of the predictive value, of Occam's Razor has, even in part, helped scientists justify restricting research are very real tragedies. Nonetheless, the blame in such cases lies with the human tendency to prefer the safety of a crowd, and not with Occam's Razor, which does not compel us to follow narrow or uniform avenues of research, and does justify seeking out the most complex and diverse set of evidence we can obtain. Also, in defense of the Razor, some of the fixed notions that have so often hamstrung science and monopolized research have not been the simplest ones, either in the past or now – merely those which somehow formed a consensus.
Which is to say: researchers trying to apply Occam's Razor to practical research questions should keep in mind that in most cases the real question isn't an abstract question of logic frozen at this moment in time, but rather which theory will eventually fit the (future) evidence best – something obviously quite difficult to ascertain.
Two ways Occam's Razor helps research
Occam's Razor has an as yet unmentioned virtue with regard to distributing investigative resources: namely, that “simpler” hypotheses are generally also cheaper and easier to test, both because they imply more (the corollary of downstream complexity), allowing more kinds of tests, and because they may allow fewer exceptions, ambiguities, or elaborate “outs” or kludges to rescue a theory. So they can usually be expected to be a little more easily tested and discarded, allowing science to move on more quickly to the next hypothesis.
Also, by concentrating attention on hypotheses in scientific argument, Occam's razor subtly encourages hypothesis and experiment, and so assists in ruling out entities from time to time. This is a large step up from truly aimless research. For a description of such research, absent hypothesis and any search for possible simplicity or elegance, one can hardly do better than to quote from Med Hypotheses. 2005;65(3):426-32, Osteoporosis, a unitary hypothesis of collagen loss in skin and bone, by S. Shuster: “Progress in osteoporosis has been stultified by repetitive, statistic-driven studies and catechistic reviews; in the absence of concept and hypothesis research is aimless, and the trivial associations it continually reveals, has led to the cul-de-sac of multifactorialism. A return to hypothesis-led research which seeks major causal defects and the conclusive therapies that arise from them is essential.” [PMID: 15951132] By encouraging us to look for still simpler (upstream) explanations that also fit more of the facts (downstream), Occam's Razor encourages us to keep looking for still better explanations and hypotheses and ways to test them, even when we have a rather reasonable-seeming explanation in hand that may be perfectly consistent with all the known evidence. This quest for ever more simplicity can push scientists to perform bolder experiments that, in exchange for the obvious risk of failure or foolishness, may yet uncover something astonishing or unexpected. Properly and precisely understood, Occam's Razor can be a motivation: it can justify seemingly superfluous challenges of current, and even established, theories, rather than abject surrenders to any current consensus.
The misuse of Occam's Razor, to scrape away evidence
In contrast, at its worst, misapplied or applied too quickly (often unconsciously or without being explicitly mentioned), Occam's razor can cut away swathes of evidence rather than merely complex speculations or hypotheses. Voltaire, the proponent of Newton, “mocked his contemporary Buffon for supposing that fossils could actually have originated on the top of mountains; he thought it more plausible that the shells had been left there by pilgrims returning from the East.” [p111 Darwin and the Darwinian Revolution by Gertrude Himmelfarb, The Norton Library, large paperback, 1968.] Since we already know that travelers litter, this is certainly a parsimonious explanation. It supposes no new kinds of entities, fulfilling our corollary, and no rare events. Yet by resorting to it, without first obtaining detailed knowledge of the real extent of the shells found, and their position inside layers actually extending into the cliffs, Voltaire cut away critical evidence without knowing it. The extent of the evidence can be seen today in a famous outcrop in a low cliff at Langton Herring, South Dorset, England, which exposes a stratum no less than ten feet thick solidly composed of shells of the Jurassic oyster, Ostrea acuminata, extending far into the earth: something that the passage of travelers cannot possibly explain. [see Simon Winchester, The Map that Changed the World, Perennial, New York, large paper, p179] Unfortunately, the proposition that shells could have been naturally deposited on what are now mountains seemed so preposterous to Voltaire as to make further detailed investigation unnecessary – even somehow unscientific. Quickly countering with Occam's razor and a simpler explanation – any simpler explanation – seemed sufficient to him at the time.
Similarly, the Reverend George Young in his 1840 book “Scriptural Geology” explained away the apparent development of species within the fossil record, which so many had noted before Darwin, “as if the Creator's skill had improved by practice”, quite simply: “But for this strange idea there is no foundation: creatures of the most perfect organization occur in the lower beds as well as the higher.” [see Simon Winchester, The Map that Changed the World, Perennial, New York, large paper, p113] The last sentence is true, and again, his explanation is parsimonious, as the assertion that there is nothing to be explained in the first place must always be. But again, it doesn't address the fine details of the accumulating evidence, or even carefully examine them; it simply resists any need to gather and examine such evidence at all. In truth, there was progression visible within the fossil record, in addition to the early complexity that Young had observed. Young's counterargument ignored a great deal of evidence in favor of a simpler (and more comfortable) explanation that sidestepped the issue. Such highly motivated decision making is common for human beings, of course (it is most easily observed in spouses, rather than in oneself), and it is just this tendency that science, at least as understood by the original Royal Society, was designed to attack.
It may be worth noting that even Darwin, on Galapagos, was guilty of rather thoughtlessly oversimplifying, even when gathering data. Darwin partially mixed his specimens from different islands without much concern, although he was told at some point by the English resident and overseer that each island had unique creatures. Even after hearing this, he collected only one of each species from one or another island rather than taking into consideration the more complex possibility that each island might have slightly different species or subspecies. Doubtless, this simplifying assumption was not consciously identified as such by Darwin at the time, or explicitly attributed by him to a Principle of Simplicity. Yet it was a misapplication of Occam's Razor before the evidence was in – perhaps made easier by the fact that he was not a biologist, as noted earlier (if we must make excuses for him.)
Years later, after much more consideration, this variation between species on the islands would be a cornerstone of his work on evolution. At the time, however, an overly simple explanation of the nature of species distributed on the islands prevented him even from gathering the best evidence, something which we are told caused him considerable later regret. Additionally, for many years, Darwin also shared with many other naturalists some simplifying (and even facile) assumptions about the unique features of the creatures of the Galapagos: attributing physical differences in species such as rats to the weather and conditions they endured after birth, and attributing the absurd fearlessness of the creatures to their unfamiliarity with man, even when considering species that had been frequently hunted for generations since the arrival of European ships. [p115 Darwin and the Darwinian Revolution by Gertrude Himmelfarb, The Norton Library, large paperback, 1968.] These missteps are human. Some of them may be unavoidable, since not every logically possible source of complexity that might be observed can be allowed for during one career or while simultaneously gathering various data: we may always have to make some simplifying assumptions without proof, in order to get any single experiment done – one cannot control for every possible source of complexity in the data every time out; some things will have to wait for the next experiment. However, science can only progress when these assumptions that underlie data gathering, both conscious and unconscious, both idiosyncratic and culturally shared, are eventually likely to be examined in their turn (ideally without someone losing the possibility of tenure). Misunderstanding or misusing Occam's Razor may make such re-examination far less likely, as the next section discusses with concrete examples.
Occam and the Environment
A more recent over-reliance on Occam's principle may now have devastating implications for our planet, and begs to be mentioned at this point. The early satellite data showing very low seasonal ozone levels over the Antarctic was thrown out by NASA. Its technicians simply recalibrated the program that was recording the data so as to eliminate all very low readings arbitrarily, neither recording them nor passing them on. The technicians judged that the simplest explanation of these unexpected observations of what would come to be called an “ozone hole” was an instrument malfunction in the satellite, since instrument failures in space were not just a known entity but a common one. No further investigation of this critical data was performed for many years, during which time scientists who carefully examined the now-falsified satellite data consistently reassured everyone that the ozone layer was not thinning. The bowdlerized data greatly delayed any chance to respond to the problem, or to realize how profoundly industrial activity was affecting climate in general. No doubt, Occam's Razor was not the only influence shaping this decision – budget, haste, mindless habit, and even some of the management problems that NASA has become famous for may all have played a part – but the Razor had a highly significant role, too, in scraping away this vital data.
Similarly, earlier in the last century an all too simple calculation seemed to show that the ocean would easily absorb all the CO2 that mankind's industries could throw at it. The history of the analysis of the greenhouse effect starts still further back than that, and involved several issues. Throughout, however, one bedrock objection to any hypothesis of a greenhouse effect from industrial emissions remained constant over more than a century. This was the certainty that the oceans already held fifty times more CO2 than the atmosphere and could absorb more. That conclusion was consistent with the little evidence then known and implied no new entities of any kind, of course. But time would show that this reasoning falsely assumes the simplest case, i.e. that CO2 absorbed at the surface would mix into deeper levels reasonably quickly. Instead it turns out that the hydrodynamics of oceans and gas absorption/diffusion within large masses of liquid are a bit more complex than was hastily supposed. But none of this was subjected to further tests or data-gathering. (Temperature trends went unexamined until Callendar, 1931 – work that was therefore easily ignored given the general scientific certainty.) In fact CO2 absorbed at the surface of the ocean tends to stay there. Mixing, or “turnover” – the time it takes to expose the whole of the ocean to the surface – takes several hundred years. [http://www.aip.org/history/climate/co2.htm#N_31_] More significantly, the ocean's surface can only absorb ten percent of the expected value thanks to a buffering acid homeostasis there [http://www.aip.org/history/climate/Revelle.htm], and further absorption will slow as the surface layer becomes saturated.
Most unfortunately, the reassuring but far too simple hypothesis that the oceans would easily absorb industrial CO2 emissions became accepted as established fact and played a continuing part in cutting short any further examination of the consequences of industrialization upon the world's temperature for many decades; with consequences we are only now beginning to come to terms with. [http://www.aip.org/history/climate/co2.htm] The incorrect theory was admirably simple: the calculations for it don't even require the back of an envelope and it supposed nothing but what was then known, while fitting any data they had. As a result, the support Occam's Razor lent (for those many decades) to the hypothesis that industrialization could not produce a greenhouse effect helped to prevent even the most basic search for further evidence: such as obtaining longitudinal records of temperature trends, or studies of ocean CO2 absorption or mixing times, or any understanding of the implication of those results as they came in, later on. Perhaps the alarm might have been sounded sooner if it had been considered an axiom of science that the Razor is far from infallible, or even usually wrong (while still being the best bet available amongst many) and can never be used in any way as a substitute for evidence, or as a reason not to trouble to gather more evidence. Perhaps.
These last two examples of Occam's Razor gone wrong come from within a single narrow topic, albeit arguably the largest problem facing humanity today. No doubt these and later examples incorporate more than one human fault. However, Occam's Razor played its part each time. In the last two cases not only did the least complex theory that fit the known evidence point in the wrong direction - providing false reassurance - but in each case the Razor helped justify a refusal to gather further evidence, despite the high stakes involved. This danger is so acute because it is always most tempting to reach for Occam's Razor at the very beginning of any scientific dispute - that is, when there is the least amount of established fact available that might be introduced into the argument. Yet this is precisely when the Razor is least reliable.
The newest theory of global warming, and Occam's Razor
It is not impossible, even in this last case, that Occam's Razor will change the direction it's pointing in at least one more time. Surprising changes to the available data on temperature are certainly very unlikely, and a new hypothesis that fits the data better may seem unimaginable. However, Vladimir Shaidurov has had sufficient imagination to throw one more hypothesis into the mix just recently. His interesting new paper relies only on accepted temperature data. A good journalistic description of the theory can be found at [http://news.mongabay.com/2006/0313-vapor.html] and the more daunting original paper is at [http://arxiv.org/PS_cache/physics/pdf/0510/0510042.pdf]. This data shows average temperatures around the earth trending sharply upward only after the first decade of the last century. Atmospheric water vapor is the most important greenhouse gas (not CO2), but Shaidurov also points to the cooling effect of repeatedly crystallizing water vapor in the middle part of the mesosphere forming what are known as “silver clouds” more than 50 km above the earth. These clouds reflect some sunlight back into space before it can reach earth. He hypothesizes that a large amount of water vapor was removed from this part of the upper atmosphere by the spectacular, 15 megaton explosion of the Tungus meteorite 10 km above Siberia in 1908, “stirring the atmosphere” and heating it, and that this “restructuring” initiated the current process of global warming by thinning the layer of “silver clouds”.
While the Tungus meteorite itself doesn't constitute a novel entity, of course; any large and self-sustaining effect on world temperature from such an event would be new to science. So that counts as a novel entity. The “silver clouds” are not new entities, but he may credit them with a stronger cooling effect than previous estimates, and his hypothesis does posit that they were more extensive before 1908 than today. For good measure Shaidurov also introduces another entity into his explanation (which in fairness may actually be necessary for any explanation that fits the available data [http://www.grida.no/climate/ipcc_tar/slides/large/05.16.jpg]) – namely a strong cooling countertrend from the end of World War II through the late Nineteen Seventies from dust and water vapor kicked up into the high atmosphere by above-ground nuclear testing.  The strength of that cooling trend would be at least partly due to his posited strong cooling effect from silver clouds: therefore, although it may seem confusing to say so, it would also be a bit of downstream complexity that actually confirms his theory, if this effect of nuclear testing were shown to be true.
In exchange for these two or three new entities, however, he has given us a hypothesis that fits the temperature data remarkably well, and in so doing explains why the earth actually cooled for nearly a century after the Industrial Revolution (continuing a long-standing trend), according to the accepted data. As a bonus (this is my addition, and Shaidurov is not responsible for any error here), his hypothesis then also explains the obvious earlier, and much longer, downward trend to boot. Connecting the dots, such gradual cooling would then be the general case historically, in between random collisions with large meteorites. In other words, the earth was still cooling from the effects of a previous such collision when the Tungus meteorite hit. (It's estimated that similar explosions may happen as often as every couple of centuries – generally over water, unobserved. [http://www.psi.edu/projects/siberia/siberia.html])
Fairness insists I also list other evidence that's downstream of his theory, if it's true – but these points don't necessarily go unexplained by the industrial theory of global warming either.
The “silver cloud” hypothesis is also consistent with data that show that the amount of sunlight reaching the surface of the earth is actually gradually declining recently. This would be consistent with the very gradual restoration of “normal” water vapor levels in the mesosphere after a collision.
Latest versions of the standard theory show that the temperature rise before 1940 cannot be attributed to the smaller amount of CO2 emitted up to that time. The “silver cloud” or meteorite hypothesis explains this earlier rise, and the fact that the slope of that earlier rise (1910 to 1945) is pretty much the same as the current rise, despite the fact that emissions of CO2, methane, etc., have become far greater.
It may explain why the current rise is seemingly steady, not clearly accelerating, from the late 1970s to the present, although industrial emissions have increased. (Estimates of the rate of increase have risen recently as it has become clearer that warming wasn't evident from 1945 to 1980; there is no obvious curve or parabolic rise in recent decades. However, expected yearly variations in temperature within this short period obscure the matter.)
It explains why ice cores show that in the distant past, periods of warming began before carbon dioxide levels rose, despite the fact that the standard theory supposes that CO2 is what drives significant climate change, in pre-history, and today.
In contrast, the rise in global temperature over the last century has three stages: an early rise from 1910 to 1945, a plateau from 1945 to the late 1970s, and a rise since. All three components are quite unlike what we see in the several previous centuries, namely a steady fall in average temperatures. The standard theory can only explain the latest stage, since 1980. For it, the earlier rise and the plateau are extra, unexplained entities, as is the earlier gentle downward slope in temperature. Not exactly ideal.
Of course, that his new hypothesis fits the most obvious chunk of data we have very well doesn't make Shaidurov right, and that it also has explanations for other phenomena doesn't prove it correct, either. I have no expertise in the area with which to render any judgement, with the net result that I'm still scared of industrialization but am also more frightened of meteorites than I was before. The precautionary principle suggests to me that we should be very careful about abandoning the standard theory prematurely.
He does suggest an experiment that would also be a cure for global warming if his hypothesis is true: rocketing payloads of hydrogen up to the mesosphere to replace the missing water vapor that will then form more “silver clouds”. If part of such a test is conducted and shows a sufficiently strong effect, or if other tests can be devised that would be decisive, then perhaps with the help of that additional data, Occam's Razor will swing 'round once again and declare that the effects of industrialization aren't causing global warming. Perhaps.
I hope, given the previously cited arguments and examples, that it goes without saying that we shouldn't allow self-satisfaction, or any conviction that Occam's Razor still most clearly supports the Industrialization Theory of Global Warming, to forestall us from gathering more data relevant to Shaidurov's Hypothesis. At worst we would learn more about a part of the atmosphere now sometimes referred to as the ignorosphere, since we know so little about it. But in the meanwhile I also hope that mere prudence, as well as the separate peril of ocean acidification by excess CO2, will motivate us to reduce greenhouse gas emissions as quickly and energetically as is possible.
Before we leave the topic of the earth's environment, one more imaginable consequence of Shaidurov's Hypothesis (the “Silver Cloud Theory of Global Warming”?) deserves to be raised. It suggests the distressing possibility that a too-quick resort to Occam's Razor might all too easily have produced vastly accelerating global warming by now. (Again, this is another addition by myself, and Shaidurov is not responsible for any errors here, either.) For Occam's Razor then suggested that the mesosphere would not be dramatically more vulnerable to a nuclear explosion than any other part of the atmosphere. No-one then had any reason to suppose such a complexity or extra entity/vulnerability. Yet had that assumption, directly attributable to Occam's Razor, ever been used to help justify the test of a large nuclear bomb at very high altitude, we might be in the midst of a population crash now, if the “silver cloud” theory is right.
Such a test wasn't entirely unlikely, either. A live test of the Soviet Antiballistic Missile system, which employed nuclear bombs to detonate incoming nuclear warheads high in the atmosphere, might have sufficed for a doomsday scenario. The largest bomb ever tested, Tsar Bomba, in 1961, was several times as large as the Tungus event. It was dropped at low altitude, but the mushroom cloud reached 18 km. Typical nuclear bombs are more like 1.5 megatons, but had such a bomb ever been given a very high altitude test directly in the mesospheric layer of the atmosphere, 50 or 80 km up, whether as a demonstration of strength or for any other reason, one can guess that it might have damaged the “silver cloud” layer far more severely than the Tungus event (only 10 km up), causing a much steeper self-sustaining rise in global temperatures if the “silver cloud” theory of global warming is right.
Perhaps we may all have to start thinking far more kindly of President Nixon and Leonid Brezhnev, who signed the ABM treaty in 1972 severely limiting the development of such weapons, as well as of President Johnson, who initially proposed such a treaty in 1967.
It is sobering enough to count up the number of times the American scientists who built the first atomic bomb employed, amongst many other shortcuts, Occam's Razor to help predict the effects of the bomb, in particular, whether it might be astronomically more powerful than they imagined. In one very important respect they are known to have erred on the side of simplicity and Occam's Razor – judging that lingering radiation and the effects of radiation from nuclear explosions on the health of survivors would be minimal.
As Simple as our imagination
“The simplest explanation consistent with all the available evidence.” Now that we can make a distinction between upstream and downstream components in a causal system, and therefore in explanations, another refinement is necessary. Do we mean A) “the simplest explanation that has yet been imagined” or do we mean B) “the simplest explanation that could possibly exist or be imagined”? There may be a very large difference between the two. Even if we strengthen A (arguably unrealistically) by changing it to read “the simplest explanation that could possibly be imagined with contemporary evidence, knowledge, and concepts”, there is still a large difference between these two interpretations of Occam's Razor, and to maintain it as a useful investigative principle, we will have to choose a single meaning.
To put the problem slightly differently, “all the available evidence” may include both upstream and downstream evidence – not just the set of consequences or events that any theory must fit, but also the known ontological or conceptual entities from which explanations can be built (or which are thought to be very common or very uncommon). In which case, as more evidence becomes available, including knowledge that may seem far removed from the problem at hand, more possible explanations may be imagined (or now imagined to be other than wildly improbable) than before. As a crude example, if you haven't yet observed cosmic rays, they aren't likely to show up in your explanations of what triggers lightning strikes, and it would be strange indeed if the Ancient Greeks had put forward any explanations, simple or not, involving neutrons, strong or weak nuclear interactions, or the disintegration of isotopes.
To be perfectly specific, the simplest explanation of why the sun shines available to the ancient Greeks is not the simplest available to us. So if we say that the simplest explanation means something like (A) the simplest explanation available at the time with contemporary knowledge and concepts, then Occam's Razor was bound to fail the Greeks much of the time, and not just when they tried to account for observations of distant fusion processes. The principle of simplicity may therefore seem a rather weak or unreliable guide if we decide on interpretation A.
Yet interpretation B is not without difficulty, either. If that's what we mean by Occam's Razor, then we can never know whether it can be applied to any yet-unsolved problem in science, since we have imperfect knowledge of unsolved problems and, necessarily, don't know what we might not know yet. That is, we can't be sure that there isn't some concept or entity that could be employed in one explanation or another but which remains undiscovered, so we can't know whether our smorgasbord of possible hypotheses is complete. What we haven't yet imagined isn't necessarily more complex than what we have, since as we've seen, complexity in this context refers to a ratio of upstream entities to downstream entities – more general explanations may be far more detailed but, because they explain more, simpler than any explanations our ancestors could have imagined for the same results. But obviously, if we never know whether Occam's Razor can legitimately be appealed to or not, then it is not much help to active scientists at all. It is strong but usually or always unavailable.
In either case, we must cope with the necessity of making a decision (not about what is true, but about which explanation the Principle of Simplicity points to) with imperfect knowledge. If we choose A, the favored explanation will not only very often be wrong but necessarily so, since the winning ticket wasn't even part of the lottery. If we choose B, our Principle of Simplicity turns into a pious tautology saying little more than that when we know everything, we'll know everything.
Here, to continue with the gambling analogy, the notion that Occam's Razor is always the best available bet yet more likely wrong than right (a notion consistent with A) rescues the principle as much as can be done. The Razor remains useful as a way of allotting resources or suggesting the first places to look for an answer (or for better evidence), while also acknowledging that any unambiguous and broadly useful formulation of the principle will often result in errors. One might well suppose that it will provide less existential comfort in an uncertain universe than it may have seemed to previously; but if this forestalls complacency or premature certainty, that too may be a help to science.
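The gambling point can be made concrete with a toy calculation (the hypothesis names and probabilities below are invented purely for illustration): the single most probable hypothesis can be the best available bet while still being more likely wrong than right.

```python
# Hypothetical priors for four competing explanations of the same evidence,
# ordered from simplest to most complex. The numbers are illustrative only;
# nothing here depends on their exact values.
priors = {"H1 (simplest)": 0.40, "H2": 0.30, "H3": 0.20, "H4": 0.10}

best = max(priors, key=priors.get)   # the hypothesis the Razor would pick
p_best = priors[best]                # chance the pick is right: 0.40
p_wrong = 1.0 - p_best               # chance the pick is wrong: 0.60
```

No rival hypothesis does better than 0.40, so backing the simplest one is the rational allocation of effort; yet with probability 0.60 it will turn out to be wrong – exactly the combination described above.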
Considering a concrete case, just today as I write, a new explanation for the “little ice age” of Medieval times has been put forward by Dr Thomas van Hoof and colleagues. It is proposed that the widespread regrowth of vegetation and reduced use of wood as fuel in the wake of the Black Death caused a relative drop in carbon dioxide levels – a reverse greenhouse effect that lasted only until the population expanded again. [Europe's chill linked to disease by Kate Ravilious http://news.bbc.co.uk/2/hi/science/nature/4755328.stm]
This new explanation is supported by new data tracking pollen deposition at the time as a measure of land use – but the information is not very surprising. The idea that human activity can affect the world's climate might have been new or unexpected one hundred years ago, but it isn't now. Moreover, the degree and reality of contemporary climate change are becoming clearer year by year, making the idea of Medieval climate change as a result of human activity (or inactivity, in this case) more imaginable, and more probable, every year. If we take Occam's Razor to mean the strong interpretation of A, above, then what are we to say of this case?
Previous explanations included reduced solar activity, more volcanic activity, or altered ocean circulation. These are all things that could have happened, but the evidence that any one of them did in fact occur at the time isn't clearcut. Whereas it is certain that land use fell sharply. Whether or not this was enough to explain the 300-year-long Little Ice Age isn't yet known, but we can already say that this is a simpler explanation, since it doesn't suppose that any events happened during that time which aren't already known with certainty to have happened. In this sense, it posits fewer extra entities: fewer events that aren't known to have happened. In the BBC article I've cited, Dr Tim Lenton is quoted as saying that this new explanation may be only a contributing factor. But if so, this would make the Black Death a component of any other explanation for this anomalous period, and so logically simpler (that is, simpler than itself plus something else).
Occam's Razor: a truth not about nature but ourselves?
I have already mentioned that it is inaccurate to see this principle of economy, Occam's razor, simply as a law describing nature. It may be more accurate to say that it tells us something about what human beings are like when contemplating nature. As even the most cursory examination of human culture or history amply demonstrates, human beings don't just prefer certainty, we adore it – whether justified or not. It must also be admitted that we are lazy whenever circumstances seem to permit it, or to put this more politely, we generally prefer to conserve biological resources when possible. Therefore, as a species, we have historically been more likely to accept, and even insist others believe in, some existing poor explanation, whether complex or incomplete, than to either remain in skeptical suspense or undertake an exhaustive search for a better explanation – one which fits more facts, or is the most elegant (simple upstream, complex downstream) that could possibly fit the present evidence. We settle on explanations too quickly, without looking for the very best fit. We have been more likely to err by accepting a somewhat awkward yet readily formulated explanation that can be cobbled together to fit the facts than to invest the very considerable effort needed to unearth and examine many more possible explanations, one of which might actually explain much more than just the phenomenon we are considering, or be more parsimonious and yet as complete, or even more complete. Judging by past history, we tend to stop energetically looking for alternative theories (and even more evidence) too soon.
It is perhaps possible to imagine a species evolving under very different conditions which more often needed to remind itself to sometimes settle for inelegant truths, or to not remain without any preferred explanation at all, rather than prolonging endlessly a search for some ideally simple, but still undiscovered explanation. For such a species, Chatton's “principle of explanatory sufficiency”, or Kant's similar formulation: Entium varietates non temere esse minuendas “The variety of entities should not be rashly diminished” (Kant, I. The Critique of Pure Reason, transl. Kemp Smith (1950), London., p. 541) might be of much greater importance to working scientists than Occam's razor.
To speak vulgarly, Occam's razor notes that too often, when it comes to explanations and certainty, humans are rather cheap dates. The razor aims to trim this tendency, at least to some extent. We might add, however, that in balancing fit, economy, and effort, it's best that not all scientists follow precisely the same principles, or they may all end up looking in the same places for new discoveries, redundantly, rather than “spreading out” a little and trying out different ideas. Ideally, applying Occam's razor, because it counters a pervasive human tendency that scientists too have, will actually mix things up a bit, introducing more diversity into scientific exploration: sending at least some scientists a bit further afield than they might otherwise go as they look for theories which either explain more, or do the job more elegantly, than any yet found. However, if a very precise formula of Occam's razor were rigidly applied, putting scientists into lockstep, it would actually slow the process of discovery. Certainly, specific, rigidly applied interpretations of the principle have been used to enforce a premature uniformity of opinion within many narrow scientific fields; but human beings' general intolerance of diverse opinion, our wish to see our opinions reflected by others, and the ubiquity of peer review prior to research or publication have caused nearly every available principle or justification to be used for such purposes. Occam's razor is probably not much more easily abused in this way than any other rule of thumb in science, and has often been used to escape the uniform biases held by particular generations of scientists.
Summarizing: the explanation that is most facile, or easiest to arrive at, and which roughly fits the evidence – or even fits all the evidence currently available – is not usually the explanation with the greatest possible upstream simplicity AND downstream complexity. Explanations of that kind, such as quantum electrodynamics – which extended the application of quantum mechanics to fields as well as single particles – are for the most part not superficially obvious, nor the first explanations that occur to us. Seen from this perspective, Occam's razor is not a mere tool for choosing between extant theories, but an injunction to continue looking for yet more elegant or wide-ranging explanations as well.
Or is the Razor about nature, after all?
Against the idea that the Razor is a fact about human beings (including our need to start with the simpler principles of nature), the following two arguments can be made suggesting that it is instead a truth about nature:
1) The first argument is that nature often prefers complexity. This is particularly true of biological organisms. For example, the proteins that DNA describes, and which determine so much of what happens within our bodies, often perform two or three roles. Similarly, organisms' bodies very frequently include redundancy and multi-purpose organs. In this sense, evolution favors complexity for economic reasons. It's better design. But of course, this is only another instance of what I've termed “downstream complexity”, in which many results flow from fewer causes than might be the case, so this is not just consistent with Occam's Razor but fulfills it exactly.
It must be noted, however, that in the context of a particular investigation, it may be difficult to discern this immediately – a proposed explanation may seem “more complex” precisely because more consequences flow from fewer causes. To use a specific example, which is the simpler hypothesis: that the biological molecule X is a precursor of another molecule active in the body, that it is a hormone signaling an environmental change, that it is an antioxidant, or that it is all three of these things? Perhaps counter-intuitively for some, according to Occam's Razor the answer is the last hypothesis: that the single molecule (melatonin in the case I have in mind) fulfills all three roles is the “simplest” in the only important sense. This is because simpler causes or entities (a single type of molecule in this case) are responsible for many consequences and effects.
A more elaborate form of this objection is that some events likely stem from complex causes – for example, that traffic accidents, or very unusual events generally, should be expected to have more complex causes than more common events. In the case of traffic accidents, it is notoriously the case that three or four things going wrong – rather bald tires, a distracted driver, sunset, and another driver who has fallen into the habit of not signaling a turn in proper time – combine to create an accident despite the safeguards built into the traffic system. Similarly, it is a statistical regularity (regression toward the mean) that very extreme statistical results, such as the home run totals of each of the top ten power hitters in Major League Baseball in one year, will likely decline in the next year. That is, the top home run hitter probably will hit a lot of homers next year, but probably not as many. This is because many different things have to go well in order to create this most extreme result in the league – health, age, experience, cooperation of management, random drift of habits, what hitting coach is available, mood, nutrition, frequency of late nights, inter-seasonal training regime, etc. It's likely that the best hitter is so because of both skill and luck, and likely that not as much of both will occur next year as well.
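The home-run claim is easy to check with a small simulation (all numbers here are invented for illustration): model each player's season total as fixed skill plus seasonal luck, and the year-one champion – whose total usually includes a large dose of good luck – almost always declines in year two.

```python
import random

def season_homers(skill, rng):
    """One season's home-run total: underlying skill plus seasonal luck
    (health, coaching, late nights, etc., lumped into one noise term)."""
    return skill + rng.gauss(0, 8)

def top_hitter_decline(n_players=200, trials=2000, seed=1):
    """Fraction of simulated league-pairs in which year one's top
    home-run hitter hits fewer home runs in year two."""
    rng = random.Random(seed)
    declines = 0
    for _ in range(trials):
        skills = [rng.gauss(30, 5) for _ in range(n_players)]
        year1 = [season_homers(s, rng) for s in skills]
        champ = max(range(n_players), key=lambda i: year1[i])
        # Same skill next year, fresh luck:
        if season_homers(skills[champ], rng) < year1[champ]:
            declines += 1
    return declines / trials
```

Running `top_hitter_decline()` gives a fraction well above one half: the champion keeps his skill, but the extreme luck that put him on top is unlikely to repeat.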
Similarly, one might assert that very rare diseases which do not obviously seem to stem from any singular rare cause would be more likely than common diseases to result from a confluence of multiple factors which occurs only rarely. As a parallel example: a computer programmer debugging a very rare error is well advised not merely to correct one necessary condition for that error, eliminating the bug, but also to thoroughly check for the other unusual conditions that were necessary for that error, and to search further in case those other rare conditions are affecting other processes as well, in equally unforeseen ways. Poor input checking under very unusual, unforeseen combinations of circumstances, for example, may expose a specific problem with a variable that has overflowed (wrapped) because it is too small. However, the wise programmer fixes not only the obvious overflow but looks for the further conditions that combined to produce the problem – in this case the variable's size, the training of those inputting data, and perhaps further checks as well. Because the error is quite rare, it is reasonable for the programmer to assume that more than one thing had to go wrong in order to produce it. The programmer is acting prudently, but he is also likely correct in assuming the reverse of Occam's Razor – that more than one cause may be at work. (This could also be seen as an instance of the idea that the Razor is contextual with time: the “low hanging fruit” most likely to be discovered first is likely to be simpler than later discoveries, something discussed a bit further on.)
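The programmer's situation can be sketched minimally (the function names and the 16-bit accumulator are hypothetical, chosen only to mirror the scenario above): the wrong total appears only when two unusual conditions coincide – input validation has been skipped AND an out-of-range value arrives, pushing the total past what the small field can hold – so fixing either condition alone masks the bug while its partner lives on to cause trouble elsewhere.

```python
MASK_16 = (1 << 16) - 1  # the running total is stored in a 16-bit field

def add_to_total(total, value):
    """Adding into a fixed-width field wraps around silently on overflow,
    just like a too-small integer variable."""
    return (total + value) & MASK_16

def process(records, validate=True):
    """Sum a batch of records. The wrong answer appears only when BOTH
    rare conditions hold: validation is skipped AND an out-of-range
    value overflows the 16-bit accumulator."""
    total = 0
    for v in records:
        if validate and not (0 <= v < 1000):
            raise ValueError("out-of-range input")
        total = add_to_total(total, v)
    return total
```

With normal data the sum is correct; with validation off and one freak input, `process([60000, 10000], validate=False)` silently wraps. Patching only the validation check, without also widening the accumulator (and retraining whoever enters the data), leaves the other necessary conditions of the failure in place.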
Although it is impossible to do the subject justice here, in summary, where “contested” or “adapted” systems with sophisticated safeguards exist - which must certainly include the human body and its immune system as well as the body of traffic laws and highway signs, barriers and safety features, and much of human engineering - expecting multiple or complex causes of any unusual events such as accidents (contrary to Occam's Razor) may not be unreasonable. Evolutionary competition between organisms can create over time conditions in which multiple defenses must be breached before a disease can occur, for example. Similarly, it is a rule of thumb that more than one thing has usually gone wrong in order for any given traffic accident to occur, since various safety measures and laws exist to limit the most likely kinds of errors. Only if you have missed sleep, then tailgate, check your mirror too hastily and then neglect a shoulder check while changing lanes during wet conditions at sunset, say, does an accident as opposed to a near miss, become likely. Thus traffic accidents notoriously tend to contradict Occam's Razor, as they take place within a heavily “contested” or adapted system which has had many decades to mature and invent measures that prevent accidents due to simple, common causes. (It should be noted however, that single events, if for example they are the result of human artifice and have not been encountered during evolutionary history, can also breach multiple or sophisticated defenses that have not been shaped to cope with anything similar to the new circumstance or event.)
In this light, one of the most singular episodes of an inappropriate attachment to Occam's Razor long obstructing science may fall under this previous discussion. For long decades, medical scientists were bitterly divided over whether human bodies defended themselves against invaders by using chemical molecules (e.g. antibodies or antitoxins) or warrior cells that attacked invaders (white cells). Both sides of this interminable and wasteful debate too strongly believed in Occam's Razor in a highly contested context where it was less appropriate than usual. Even long after both sides had amassed considerable evidence in favor of their own hypotheses, neither side was capable of imagining the truth: that both hypotheses were true. The body gets the best results by using both. Instead, the two sides fought bitterly to discredit or discount the evidence put forward by the other side. We now know that a complex system of immunological defense is necessary so that only foreign matter, and not our own tissues, is attacked by our immune system, most of the time. To accomplish this, chemicals (antibodies) are produced which identify known kinds of enemies, and other antibodies are produced which identify our own tissues to white cells to prevent attacks on our own tissue (autoimmunity). Evolutionary struggles with other organisms have necessitated the gradual development of a very elaborate and sophisticated system indeed. Given the competitive natural conditions of evolution, such complexity might perhaps have been anticipated from the beginning, but it was not. A too simple adherence to Occam prevailed – and often does today, not taking into account the special circumstances of contested, adapted systems. Similarly, it has been said of human beings that we “never do anything for a single reason”; perhaps this too is to be expected given the complexity of the human mind, and the competitive conditions of its evolution that helped make it so complex.
It may therefore be unwise to assume that our boss's, neighbors', and coworkers' motivations are usually simple and just what they state, justifying this assumption with Occam's Razor. [FOOTNOTE The word “spouse” has been excised from the previous sentence by special request.]
2) The second possible objection to the idea that Occam's Razor is about us is more interesting, and perhaps can't be so quickly dismissed. This is the idea that while very many universes might exist according to modern interpretations of physics, conscious life-forms capable of understanding their surroundings would be more likely to arise in universes with more unchanging physical constants. (“Unchanging constant” is not a redundancy, since recent theories have proposed – falsely, experiment seems to show – that some “constants”, such as the speed of light or the gravitational constant, might actually be changing over time.) Evolution takes a great deal of time, and steadier local conditions may be thought to allow such gradual evolution to continue and to accumulate more complexity, building on previous organisms, biological mechanisms, chemistry, etc. A universe with more constants would be expected to result in steadier local conditions that help evolution along, and could usefully be said to be simpler. For example, Einstein's discovery of the constancy of the speed of light (his special theory of relativity) is not, so far as we know, necessitated by other inevitable laws of physics. So it might be that we live in a rare universe in which it is true, and that we have evolved in this universe in part due to the presence of this “extra constant”, which has helped to create steady conditions more suitable to biological evolution – many universes without such a constant, or with fewer fixed constants in general, may simply go unobserved, according to this notion. If so, a degree of “simplicity” in the universe, or at least constancy, would be a truth about this particular universe, but not necessarily of “all that exists” or, if you like, the set of universes in general.
It has frequently been suggested that the fact that various constants have particular values in this universe may reflect such an “anthropic principle” – that only in such convenient universes could intelligent observers, and therefore science, exist – but I don't know whether the importance of the very constancy of constants has been mentioned in this context.
When good rules go bad: Occam gone wrong
Historical examples of the failed predictive power of the Principle of Simplicity
It may be argued that at least some of these examples involve naive misinterpretations of Occam's Razor. Certainly, it can be argued that in some of these cases Occam's Razor may simply have been applied too hastily or even improperly, without seeking further data, or even as a reason to avoid such efforts. However, in many other cases, I believe it's quite clear that the simplest hypothesis that fit the available evidence just turned out to be wrong, as better evidence turned up. It is sadly true that in many of these cases, including ones vital to a given science, Occam was not only taken too seriously but became one more reason to cut off debate prematurely.
Once one takes the trouble to look carefully, there are a shocking number of examples from the history of science of occasions when the explanation that seemed to scientists of the day to best fit the available evidence turned out later to be wrong. That is, examples where the favored theories were succeeded by more complex theories, because the evidence eventually compelled a change of theory. In many of these cases the earlier theory was not merely held to be a “best guess” or even to have high probability, but was accepted as certain, or all but certain. Anthropology shows us that in every culture, certainty and conviction are not in short supply, so this should not surprise us: the process of science doesn't prevent group-think, it eventually corrects it. Still, these examples show that if we can limit the tendency, science will accelerate.
The most compelling examples to moderns are, of course, modern examples. So we may begin with a very recently deceased piece of scientific knowledge, from medicine. As I grew up, I heard repeatedly, from all directions, the then relatively new scientific wisdom that while cold could be harmful, exposure to cold did NOT cause anyone to catch colds. This contradicted, I need hardly say, the firm and ubiquitous tradition from our less enlightened Grandmothers, that cold did just this. When I was in graduate school, a colleague of about my age from mainland China stated that cold did cause colds, and while I was not rude enough to contradict him, I did take this as a sign that China wasn't yet scientifically entirely modern.
Now, in 2005, it turns out that he was right. What I was told, and what the medical establishment absolutely believed, was not only false but based on scanty evidence – and, no doubt, an enthusiastic application of Occam's Razor to boot. A recent empirical study has shown that for at least ten percent of the population, namely those who have caught the most colds in the past, chilling the body does sharply impair the immune response and makes it much more likely that they will come down with a cold. In the parlance of philosophy of science, there is at least one white crow.
Of course, if two scientific views are identical except that one does not include an extra principle (such as “cold causes colds”), then the former is simpler, by any logic – yet not necessarily true. In this case it can be argued that a clear misapplication of Occam's Razor was also part of the picture. The temptation to this extra bit of theoretical simplicity was the new discovery during the last century that viruses “caused” colds and flus. Since we now knew that viruses existed and were the cause of colds, that was the end of the discussion – cold couldn't cause colds, viruses did. This logic was held strongly enough that it may well have influenced (presumably unconsciously) the design and interpretation of the little research on the subject that was done, and discouraged thorough testing before the new explanation was accepted. Certainly, such false logic made it all the easier to accept the simplest hypothesis and abandon the folk wisdom that cold does cause colds, without performing very extensive further research.
This bit of meta-simplicity – the folk idea that causes in general are singular and never complex, that is, that there can never be two causes for a single event – was shown in detail to be false logic by John Stuart Mill's writing on causation more than a century ago. As I intend to discuss in a separate essay on causation and genetics, this attachment to singular causation constantly reappears in science, often directly alongside the clear understanding that in some other case, causation is extremely complex, involving multiple factors and complex interactions between them. Hey, it's a helluva timesaver. Just not a reliable one.
Even engineers and medical researchers who frequently deal with multiple risk factors and extremely complex causal “pathways” succumb to this error. But what John Stuart Mill showed in his essay on causation was that causation is always, in a logical sense, complex. For example, a fire requires gravity (which allows convection), oxygen, and combustibles, as well as, say, a carelessly dropped match. But humans tend to isolate whatever factor is most easily changed (or just the most recent to appear) – the dropped match in this case – as the cause, for the sake of simplicity. I've assumed in this essay that readers do not consciously subscribe to this notion, or delusion. However, it is such a natural shortcut that all of us, in this very practical world, fall into it from time to time. Therefore, it is worth noting explicitly that we should be especially careful of such a facile, false move when applying Occam's Razor.
But the history of science taken as a whole offers hosts of examples of Occam's Razor (as applied to existing evidence) being wrong, and even a great number of examples that are not at all obscure, but prominent in the history of science. A few of these have already been discussed. Here, as a preliminary to evaluating a bit more precisely the practical value of Occam's Razor and its predictive power, I wish to examine a few historical examples in detail as a way of further grounding our understanding of the principle, and then to more quickly list a larger number of fairly well-known instances from the history of science in which Occam's Razor (as applied to the available evidence) was in error.
It's also interesting to note that the principal objection facing Darwin, and those others who had previously proposed that species might have branched out from “common progenitors”, was Occam's razor, which had been used to cut down previous theories that tried to introduce “species transmutation” as a new scientific entity. It seemed to more distinguished biologists, zoologists and paleontologists such as Cuvier (Darwin was a geologist) that to propose such new, undiscovered entities as parental species without genuine necessity was unwarranted. To quote Darwin's own early notes: “Cuvier objects to propagation of species by saying, why have not some intermediate forms been discovered between Palaeotherium, Megalonyx, Mastondon, and the species now living?” [Himmelfarb p.152] This use of Occam's razor by Darwin's opponents was worth raising, since introducing new entities without firm evidence that would make them clearly necessary cuts against the Razor, but it was to prove mistaken nonetheless. To counter this argument, Darwin was forced to present a very diffuse argument, demonstrating that the transmutation of species might explain a very diverse set of phenomena – some known puzzles and many much more obscure – that might otherwise go unexplained. (In other words, he had to show his readers that a very large amount of downstream complexity was explained by his slightly more complex set of causes.) He ultimately succeeded at this very difficult task, showing that after consideration of a larger body of evidence, Occam's Principle was better fulfilled by employing evolution (the Principle of Natural Selection together with hypothesized progenitor or ancestral species) to explain very much by still relatively little. Over twenty years, he had assiduously amassed a great deal of diverse preexisting evidence which could support (be explained by) his theory, and he also provided some new evidence from his own travels and research.
Darwin understood very well that mere simplicity was not sufficient for progress in science; that theories with the utmost explanatory power (downstream complexity) must also be sought.
Was yellow fever caused by a germ of some kind? Researchers thought they had isolated a common bacterium that was responsible for the disease in 1899-1900. Yet only a year later the answer was: only if germ-borne diseases could be spread not only by touch and air, but also by a newly considered entity (or in modern terms, vector), namely mosquitoes; and then only if some new pathogenic entity was responsible, rather than the bacteria which had already (and, it would turn out, prematurely) been experimentally associated with yellow fever by Sanarelli and others; and only if, to the considerable surprise of researchers, another new entity (or then unknown factor) in the development of fevers existed: a silent incubation period of many days before the disease manifested itself – a notion initially greeted with profound skepticism. All these new entities, it would turn out, did exist and were part of the story of Yellow Fever transmission. While none of them were necessary to an explanation of Yellow Fever that fit the available data in 1899, such as Sanarelli's theory, they were necessary for an explanation that fit the data available by 1901. Fortunately, the military importance of Yellow Fever during the American occupation of Cuba after the Spanish-American war, the impending task of building a Panama Canal, and dogged and inventive persistence by Agramonte and others ensured that the theory which Occam's Razor clearly pointed to in 1899 (causation by a known bacterium also infecting hogs) wasn't settled on prematurely, and that avenues which, according to Occam's Razor, seemed very, very unlikely indeed (requiring three quite distinct novel entities, while not fitting the then available evidence better) were in fact investigated. Of course, investigators weigh disparate evidence differently.
In this case, in 1900, Wasdin and Geddings were satisfied with the given bacterial explanation, and with the consistent evidence for it they were finding during autopsies of Yellow Fever victims in Cuba. They were not eager to look further for an explanation. Agramonte felt, in effect, that this explanation was too simple – in particular, that it was too much like the many previous quite straightforward bacterial “discoveries” of the cause of Yellow Fever which had not been borne out; and that finding more evidence and more kinds of evidence – such as discovering cases in which the suspected bacteria were present but Yellow Fever was not, and further investigating other avenues of transmission, such as by mosquito – was worthwhile. His intuition seems to have been that a more complex explanation must be necessary, given the sheer number of false starts and premature “discoveries” of the cause of Yellow Fever that had already been racked up. (An instance, or corollary, of the “low hanging fruit” principle mentioned above, which suggests that the power of Occam's Razor weakens with time and investigation.)

Our story ends happily. Surgeon Henry R. Carter published his observation of a two-week latency between transmitted cases in 1900, and a singular case of only one prisoner in a cell becoming infected got the American team in Cuba thinking seriously about infection via insects and mosquitoes for the first time. Previously, such transmission had seemed “discredited by the repeated failure of its most ardent supporter, Dr. Carlos J. Finlay, of Havana, to demonstrate it.” (Agramonte) Soon, proof of an incubation period of many days by Agramonte's associate Dr. Jesse W.
Lazear et al., in the same year, explained why those earlier experiments had failed to show transmission of the disease by mosquito bite – it wasn't enough to look for immediate transmission; there was a uniform, wholly silent incubation period during which no disease appeared – something which had seemed very unlikely to medical researchers, who had been hoping for a simpler explanation. Nature, in this instance, dictated different terms – only a year later, the simplest explanation that fit the newly available evidence concerning Yellow Fever had to include a then unknown pathogen not yet associated with the disease, mosquitoes as the means of transmission, and thirdly, the novelty of a latency period: three new entities that the previous bacterial explanation had no need of. (Mosquitoes had been shown to spread Malaria by Ronald Ross in 1897, but were not part of previous explanations of the cause of Yellow Fever.) So 1900 became a triumphant year for medicine, but perhaps not so much so for Occam's Razor. Nature doesn't always surprise us, but it does frequently surprise us. To quote Dr. Jeffrey P. Friedman, an internist, concerning an apparent case of diabetes that turned out to be caused by an adrenal tumor: "I was taught that when you hear hoofbeats you think about horses, not zebras. But every now and then you hear hoofbeats and it turns out the circus is in town."
The special case of Special Relativity: Einstein's theory didn't fit the available evidence because some of that evidence was just wrong – which makes his efforts only more remarkable.
 CO2 is believed to drive the changes in climate, but water vapor does most of the work of reflecting heat radiated from the earth back to the earth.
 Contrary to most people's intuition, meteorites above a certain size are actually much more likely to explode in the atmosphere than to reach the ground. Most meteorites that reach the ground relatively intact do so because the atmosphere slowed them down fairly quickly, before they were heated to the point of vaporizing. So one shouldn't conclude from the fact that the Tunguska meteor exploded that it was small or not very dense. The opposite is true – even though truly huge meteors will also penetrate the atmosphere largely intact.
 Granted, these were also large bombs, often measured in megatons, but these explosions occurred much lower in the atmosphere than the roughly 10 km altitude at which the meteorite is thought to have exploded, so nuclear tests didn't affect the high atmosphere significantly.