Wednesday, May 23, 2007

Occam's Razor

It may be refreshing at this point to review the insight of the medieval philosopher William of Occam (or Ockham):

"One should not increase, beyond what is necessary, the number of entities required to explain anything"

Occam's razor is a logical principle which states that one should not make more assumptions than the minimum needed. This principle is often called the principle of parsimony. It underlies all scientific modelling and theory building. It admonishes us to choose from a set of otherwise equivalent models of a given phenomenon the simplest one. In any given model, Occam's razor helps us to "shave off" those concepts, variables or constructs that are not really needed to explain the phenomenon. Doing so makes developing the model much easier and reduces the chance of introducing inconsistencies, ambiguities and redundancies.
Though the principle may seem rather trivial, it is essential for model building because of what is known as the "underdetermination of theories by data". For a given set of observations or data, there is always an infinite number of possible models explaining those same data. This is because a model normally represents an infinite number of possible cases, of which the observed cases are only a finite subset. The non-observed cases are inferred by postulating general rules covering both actual and potential observations.
For example, through two data points in a diagram you can always draw a straight line, and induce that all further observations will lie on that line. However, you could also draw an infinite variety of the most complicated curves passing through those same two points, and these curves would fit the empirical data just as well. Only Occam's razor would in this case guide you in choosing the "straight" (i.e. linear) relation as the best candidate model. Similar reasoning applies to n data points lying in any kind of distribution.
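This underdetermination is easy to demonstrate numerically. Below is a minimal sketch in Python (assuming numpy is available; the particular data points and the coefficient 7 are arbitrary illustrative choices): the unique straight line through two points and one of the infinitely many cubics through the same two points both reproduce the observations exactly, yet disagree completely away from them.

    import numpy as np

    x = np.array([1.0, 2.0])   # two observed data points
    y = np.array([3.0, 5.0])

    # The unique straight line through both points: y = 2x + 1.
    line = np.polyfit(x, y, 1)

    # One of infinitely many more complicated curves through the same two
    # points: any multiple of (t - 1)(t - 2) vanishes at the observations,
    # so adding it leaves the fit there untouched.
    def cubic(t):
        return np.polyval(line, t) + 7.0 * t * (t - 1.0) * (t - 2.0)

    print(np.polyval(line, x))    # [3. 5.] -- matches the data exactly
    print(cubic(x))               # [3. 5.] -- matches just as well
    print(np.polyval(line, 3.0))  # 7.0  -- but the two models disagree
    print(cubic(3.0))             # 49.0    completely away from the data

Both models are empirically indistinguishable on the observed data; only the razor's preference for fewer free parameters singles out the line.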
Occam's razor is especially important for universal models such as the ones developed in General Systems Theory, mathematics or philosophy, because there the subject domain is of an unlimited complexity. If one starts from foundations that are too complicated for a theory that potentially encompasses the universe, the chances of arriving at any manageable model are slim indeed. Moreover, the principle is sometimes the only remaining guideline when entering domains of such a high level of abstraction that no concrete tests or observations can decide between rival models. In mathematical modelling of systems, the principle can be made more concrete in the form of the principle of uncertainty maximization: from your data, induce the model which minimizes the number of additional assumptions.
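In information-theoretic terms, this "uncertainty maximization" can be read as Jaynes' maximum-entropy principle: among all distributions consistent with the data, choose the one with greatest entropy, since it commits to nothing beyond the stated constraints. The sketch below (Python with numpy; the die with a long-run mean of 4.5 is Jaynes' classic illustration, and the bisection solver is an arbitrary implementation choice) finds the least-committal distribution over the six faces given only that constraint.

    import numpy as np

    faces = np.arange(1, 7)   # outcomes of a six-sided die

    def mean_for(lam):
        # The maximum-entropy distribution under a mean constraint is an
        # exponential family: p_k proportional to exp(lam * k).
        w = np.exp(lam * faces)
        return (w / w.sum()) @ faces

    # Bisect on the multiplier lam until the model mean hits 4.5;
    # mean_for increases monotonically in lam.
    lo, hi = -5.0, 5.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if mean_for(mid) < 4.5 else (lo, mid)

    w = np.exp(0.5 * (lo + hi) * faces)
    p = w / w.sum()
    print(np.round(p, 4))   # the least-committal probabilities
    print(p @ faces)        # ~4.5, the only assumption imposed

Any other distribution with the same mean would smuggle in an extra, unsupported assumption about the die: exactly the kind of entity the razor shaves off.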
This principle is part of epistemology, and can be motivated by the requirement of maximal simplicity of cognitive models. However, its significance might be extended to metaphysics if it is interpreted as saying that simpler models are more likely to be correct than complex ones, in other words, that "nature" prefers simplicity.

It would be awfully redundant at this point to reiterate that the theory of dipole gravity makes no assumptions other than the ones general relativity itself is based on. With such a minimal number of assumptions, the range of cosmological problems it touches and answers is truly remarkable. It can be wrong only if general relativity itself is wrong. This perspective gives us a compelling reason to test the predictions of dipole gravity in a terrestrial experiment as soon as possible.
http://dipoleantigravity.blogspot.com/2007/04/alternative-method-of-detecting-dipole.html
