Thursday, December 12, 2013

Balancing the Evidence on Global Climate Change

Let's get something straight from the beginning.  Humans are spewing carbon dioxide into the air and, while there are a number of sinks that will take up much of it, it takes time and much of what will eventually end up in a sink will spend significant time in the atmosphere first.  This increase in atmospheric carbon dioxide will generally contribute to a higher mean global temperature.  If you are inclined to deny these two assertions, you are, indeed, an unscientific, head-in-the-sand, Climate Change Denier.

However, there is an enormous gulf between CCDs and the sky-is-falling, apocalyptic prognostications of the Climate Change Industry.  Somewhere in that gulf lies the most probable scenario.  Precisely where that point lies is one of the most important questions, if not the most important question, facing us today.  That is not because we need to avert the apocalypse, but rather because we are very likely to impose upon ourselves a cure that is far worse than the disease.

The Climate Change Industry is an unholy alliance between the climate science community and a small, but highly influential, community of socialists and globalists.  Scientists outside of the climatology specialties are, on average, mildly liberal and vaguely globalist.  Consequently, in addition to 'science team loyalty', they generally look favorably upon any argument made from the perspective of taking global action for the betterment of Mankind.

I really don't blame the climate scientists.  Well, not much, anyway.  It is an ineluctable truth that governments fund research proportional to the perceived severity of the problem and industry funds research proportional to the perceived opportunity.  Since climatology just isn't likely to get much industry research funding, it is in the best interest of climatologists, within the constraints of scientific integrity, to emphasize the potential dangers of climate change.

Actually, the problem isn't so much with the scientists as it is with the news media that interpret the science.  For example, atmospheric CO2 has been increasing at a rate of 0.5% per year for quite some time.  The latest reported concentration was 397 ppm.  So, a naive projection gives 397 × 1.005^87 ≈ 613 ppm at century's end.  The IPCC AR5 midrange is 650 ppm, not significantly different from the naive projection.  This level of increase suggests a temperature increase of about 1.5°C by 2100.  That value falls within the margin of error of all IPCC AR5 scenarios.  It is also not very alarming, either in itself or in its implications.
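
For readers who want to check the arithmetic, here is a minimal sketch of that naive projection in Python; the starting concentration, growth rate and horizon are just the figures quoted above, and the variable names are my own:

    ppm_2013 = 397.0        # latest reported CO2 concentration, ppm
    growth = 1.005          # 0.5% increase per year
    years = 2100 - 2013     # 87 years to century's end

    ppm_2100 = ppm_2013 * growth ** years
    print(round(ppm_2100))  # -> 613 ppm, close to the AR5 midrange of 650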

However, that is not the story being told.  When I google '2100 CO2 and temperature', the top returns, exclusive of government sites that accurately convey the AR4 or AR5 position, present as their scenario:

  1. 2.8°C
  2. 850 ppm 3.5°C
  3. "On our current emissions path, CO2 levels in 2100 will hit levels last seen when the Earth was 29°F (16°C) hotter"
  4. A factual report on the IPCC AR5 midrange projections.
  5. 800 ppm 3.7°C
Essentially, based upon current Climate Sensitivity analysis, the AR5 slightly overestimates temperature increases, but keeps the result consistent with a 3.0°C CS value within its margin of error.  By taking the highest range and, in some cases, finding reasons to go even higher than that, the media significantly misrepresent the science.

As we can see in the graph to the left, we have a consistent CO2 record of more than 50 years that just doesn't deviate very much from the 0.5% per year increase.  The Climate Sensitivity number has been estimated at 3.0°C +/- 1.5°C for quite some time, and more recent research suggests that, if anything, it may be in the lower half of this range.

So, there really isn't much reason to expect the midrange to differ much from the 1.5°C implied by the naive projection, at least not unless there is a substantial change in the amount of CO2 emissions.

Again, a full treatment of the science behind climate change would be very extensive and is not my point here.  Rather, I wish to show that the climate science community, knowing on which side its funding bread is buttered, has a mild bias toward the upper range of reasonable projections.  The media and politicians with a political agenda, however, strongly bias their projections to the upper end, often arguing for scenarios that exceed the margin of error of even the highest projections.

The more serious problem, however, is the use of apocalyptic scenarios based upon the upper-end and beyond projections.  Sadly, research is generally done only on the deleterious effects of warmer temperatures.  For example, earlier research suggested that climate change would decrease the food-producing capacity of the planet.  It has now been superseded by research concluding that the amount of arable land will increase sufficiently to feed an additional billion people.  Average yield per hectare should also increase due to longer average growing seasons but, while this has been sporadically mentioned, no research results have been reported.  Despite this, the news is still full of starvation scenarios based upon extended droughts and lower yields per hectare.

The Sahel is greening, and global warming is consistently credited.  In fact, some research suggests that the Sahara desert may be gone completely within a century.  However, this rarely gets reported.  There was a flurry of reports when the 2009 research was published, but it is not mentioned anymore.  This is potentially an increase in arable land equal in area to the U.S.


At the beginning of the Global Climate Change phenomenon, it was regularly reported that rising temperatures would increase the frequency and severity of weather events.  However, over time, it was discovered that the numbers of severe tornadoes and hurricanes have actually been decreasing.  The decrease in severe hurricanes is probably the result of increases in high-altitude wind shear.  The decrease in tornadoes may be due to reporting bias.  However, there are emerging explanations for a decrease in the severity of land-based storms as well.  Again, this is rarely reported.


It is true that rising ocean levels will almost surely decrease the amount of shoreline, and oceanfront property is particularly desirable.  However, a warming planet will increase the total desirability of oceanfront land, because warm beaches are preferred to cooler ones.  The total land suitable for human communities will also increase, as tropical, subtropical and temperate zones move poleward.

I suspect, but do not know, that global warming will not have a significant impact on human activities and, after an initial period of transition, will be mildly positive from a human perspective.  I can't make a confident statement because the research needed to form a balanced position is not being funded.

While global warming will displace species, it is not yet clear exactly what the relationship will be between a warmer planet and biodiversity.  We know that, generally, stable ecosystems decrease diversity while perturbed ones often provide opportunities for speciation. 

Also, much of the climate-induced threat to species is exaggerated.  For example, polar bears were put on the endangered species list in 2008, citing global warming.  However, on further review, it appears that polar bear populations may actually be increasing.


It is also questionable whether global carbon emissions will remain at present levels or increase.  Total carbon emissions in the U.S., after remaining relatively flat beginning in 2000, began falling in 2007 and will likely continue to do so.  There are currently 4,800 MW of installed solar capacity.  However, that is about to explode, with 27,000 MW of capacity either under construction or planned.  This alone will reduce U.S. carbon emissions by about 0.5%, and we are far from done.

This burgeoning use of solar energy to equalize the demand curve is simply not being reported.  As reporters say, 'It doesn't advance the narrative.'  By this statement, they mean that the story is that humans are polluting the environment with their rapacious appetite for energy and that if they don't take drastic action there will be Hell to pay.  The rapid transition from hydrocarbons to solar energy for much of the peak-shaving electricity requirement, for example, would just confuse the audience and detract from the true story.  This is a horrible journalistic practice, but it is widespread.

Enhanced Geothermal Systems, Ocean Thermal Energy Conversion and Liquid Fluoride Thorium Reactors can meet all of our energy needs at current prices.  Electric cars, getting their energy from the roads, will provide a realistic way to wean our transportation energy use off of oil.  

What this all means is that by 2050 carbon emissions likely will be falling precipitously.  This will result in CO2 atmospheric concentrations being less than even the IPCC AR5 lowest scenario.  It means that by 2100 atmospheric CO2 and global temperatures may be lower than today.  

This will likely happen even without any draconian global treaties, which are useless if they exempt India and China, as the Kyoto treaty did.  As we can see, U.S. emissions are decreasing, while Chinese emissions have increased so dramatically that China now surpasses the U.S. as the top CO2-emitting nation.  India is now fourth, behind Russia.

In other words, the U.S., E.U. and other developed nations are no longer the problem.  Yet, none of this is making the press because, again, it 'doesn't advance the narrative.'

So, the evidence for dramatic anthropogenic global climate change, and for projected deleterious effects from it, is not as strong as presented in the press.  However, neither is it trivial.

So, as Acting Editor of The Polymath, I will institute a regular column that will present evidence for and against the risk from Anthropogenic Global Climate Change.  I will not allow rabid deniers or alarmists.  The idea that you can present hysterical or extreme views and the reader will average them and come up with a considered position is false.  It becomes my lies against your lies, and the truth is nowhere to be found.  However, there is a range of perspectives that should be considered by the erudite person.  I will strive to have them presented honestly and responsibly.

Please subscribe to The Polymath if you have not already done so.  It will be sporadically published until we have grown sufficiently in circulation to support ourselves as a weekly news magazine.

Thursday, March 14, 2013

Rational Intelligent Design

Intelligent Design has become anathema among most elites.  Many prominent atheists attempt to characterize it as nothing more than Creationism with a new name.  This article calls it 'Intelligent Design Creationism', implying that it is nothing more than a variety of Creationism.  These are sidesteps and should be ignored by thoughtful people.  Intelligent Design, irrespective of any religious assertions, states that the Universe, or some aspect of it, is the result of intelligent volition.  It does not require that the creator be called God, or that 'He' is omniscient, omnipotent, or all-forgiving, has a plan for people's lives, gifts the dead an eternal residence in paradise, or is imbued with any of the other trappings that religion has conferred upon 'Him'.

The rational person should place in a hopper all explanations, not yet disproved, for a given phenomenon that is without definitive explanation.  That the explanation is 'scientific', judged to be 'rational', or does or does not meet any other preconceived notion of a 'reasonable explanation' is not important.  If it explains the phenomenon and is not disproved, it should be included.  To do otherwise is actually irrational.  It is akin to insisting that reality conform to one's expectations of it.

From these, one should be selected, based upon a judgement of the strength of evidence, as a working hypothesis.  This situation ends when either one explanation comes to dominate the evidence or all but one explanation has been eliminated, the famous Sherlock Holmes method.  At that point it becomes doctrine in the current world view and is generally referred to as a fact.  Many 'scientists' would argue that this 'working hypothesis' approach is unscientific.  Yet, they do it all the time.  That the Universe came into being through mechanistic processes is one of their working hypotheses.  There is actually no significant evidence to support that assertion.

Evolution as a mechanism by which speciation takes place is a fact, as defined above.  That it is the only mechanism that caused the simplest of self-replicating molecules to transform into the riot of morphological and biochemical variation we see today does not rise to that level.  In fact, it may not even properly be chosen as the current working hypothesis over the full spectrum of events.  That there was intelligent and volitional intervention in the process is a possible explanation that needs to be placed in the hopper.  

The above is far from a comprehensive treatment of Intelligent Design.  In fact, without precluding the possibility that there are more, there are at least three fundamental questions that relate to Intelligent Design. They are 1) How did the Universe come into existence with physical parameters that seem to be carefully designed to allow the emergence of complexity? 2) How did the first self-replicating molecules come into existence? and 3) How do adaptive traits that require mutations on multiple gene sites and that do not appear to have mechanisms for incremental evolutionary reward come into existence?

In each case Science cannot provide us with a definitive answer. In other words, the questions do not have answers that rise to the level of 'fact'.  Rather, we find ourselves with a basket full of speculations that are not disproven. One of these speculations is that it results from intelligent and volitional intervention. The least problematical explanation should be the working hypothesis of the rational person.


The Finely Tuned Universe

The argument is made that if the Universe did not have the force relationships, fundamental structure, etc. that it has, the resultant universe would not be conducive to the emergence of complex structures, such as life.  It has been a matter of great controversy with some scientists arguing quite clearly to a desired conclusion.  Of course, Theists are also inclined to do so.
  
I would assess that the basic natural laws appear to result from intelligent design and that, at least at this time, it is a far more plausible explanation than any of the alternatives.  The aversion to including Intelligent Design as an explanation for the start of the Universe has led to what I consider to be a scientific embarrassment.  Physicists are slowly moving away from the Copenhagen Interpretation of Quantum Mechanics toward the Many Worlds Interpretation, not because it is less problematical, it's not, but because by doing so they avoid the need to conclude that the Universe was intelligently designed.

This is, I believe, an effort to avoid the difficulties of the Strong Anthropic Principle.  To explain, suppose that you bought a lottery ticket and won the Jackpot.  That someone was going to win it eventually is a foregone conclusion.  However, that you won it is likely to appear very, very improbable to you.  We, collectively, are not at all surprised that someone won.  This is equivalent to the Weak Anthropic Principle and does not argue for an intelligently designed Universe.

Now, however, suppose that you had the only lottery ticket and you won.  In this case, we are all likely to think that something is hinky.  The odds that you would win are no different - extremely remote.  However, that the only ticket in existence won is not a certainty but, rather, is exactly as improbable as your being the winner was in the first scenario.

The Finely Tuned Universe, when considered within the context of the Copenhagen Interpretation, leads to a situation similar to one where you win with the only lottery ticket sold.  The Many Worlds Interpretation, on the other hand, is akin to the situation where millions of lottery tickets are sold and a winner is actually likely.  The Weak Anthropic Principle basically says that we should not be surprised that we live in a Universe that is finely tuned because it is a requirement for us to be here thinking about it.

Remember that physicists originally believed in a steady state universe.  Einstein inserted a 'cosmological constant' so that his equations would predict one.  They believed so not because they had evidence, but because a 'big bang' Universe implied a moment of creation, which smacked of intelligent design.  It is amazing how, when the big bang was more or less proven, they blithely moved right past the idea of creation and, without a shred of evidence, assumed that a natural explanation for creation would be found.

They found one, of course, and it is non-trivial.  They claim that 'nothing' is unstable and that, given enough time, it will explosively turn into something.  Surprisingly, the mathematics of quantum physics supports that conclusion.  However, that interpretation creates other problems that require physicists to postulate that the 'universe' is actually just one bubble in a sea full of bubbles... again getting them out of their Strong Anthropic mess.

There is a serious question under this interpretation as to whether the Big Bang was actually the beginning of the Universe.  In other words, does a sea of quantum foam actually equate to nothing, or is it actually a Universe with a net zero mass?  This surely raises the question of how long the quantum foam existed before it explosively turned into the positive-net-mass Universe that we have now.  It actually gets quite peculiar, because it is precisely the rules of Quantum Physics that say that the quantum foam can spontaneously and explosively turn into something whose net mass is not zero on a universal scale.  The likelihood of this happening is a probability function.  In other words, given enough time, it should happen.

So, we find ourselves with the next logical question.  How did it come to be that the quantum foam existed, and did so with the necessary rules to cause the Big Bang to happen?  In other words, while the 'non-divine' creation of the Universe appears to be supported, it really just 'kicks the can down the road.'  Sooner or later, the issue of first cause will raise its ugly (to atheists) head.

In total, looking at cosmology and theoretical physics, the notion that the universe 'just happened' is not disprovable, but it is, well, a bit fishy.  Intelligent design, while not provable, solves the problems efficiently and probably should be our working hypothesis solely on the basis of Occam's Razor.  It has seriously been suggested that our Universe was created by 'aliens'.  Scientists who suggest this are, in essence, giving up and embracing Intelligent Design, but substituting 'alien' where Theists would put 'God.'  Of course, with aliens, we quite reasonably wonder where they came from.


The First Self-Replicating Molecule
 

How did life begin?  Or, more precisely, how did the first self-replicating, mutation-prone molecule come into existence?  Evolution cannot provide the answer, since it requires replication to operate and, without such a molecule, evolution has nothing to work upon.  Finding a mechanism for creating such a molecule has proven to be very, very difficult.

The leading candidate right now may be the polymerization of amino acids in suspension and left to dry on clays or crystals.  Experimentally, we know that this leads to proteinoids and oligopeptides that, at least, give more reasonable building blocks than simple amino acids.  To explain: in 1996, David Lee reported a self-replicating molecule of 32 amino acids.  The specific sequence has odds of forming randomly of 4^32, or 18,446,744,073,709,551,616:1, against.  In other words, randomly, it just isn't going to happen.  He created the self-replicating molecule, however, by combining two polypeptides of 17 and 15 amino acids.  This changes the odds to 4^17 + 4^15, or 18,253,611,008:1, against - about a billion times more likely.
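
A quick sketch of that odds arithmetic in Python, assuming, as the calculation above does, four equiprobable choices per position:

    # Odds against a specific 32-unit sequence assembling at random,
    # versus assembling it from preformed 17- and 15-unit halves.
    whole = 4 ** 32               # 18,446,744,073,709,551,616
    halves = 4 ** 17 + 4 ** 15    # 18,253,611,008
    print(whole // halves)        # ~1.01e9: about a billion times more likely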

This, then, leads us to an imagined scenario where a primordial soup splashes, through tidal action, upon volcanically heated rocks, slowly transforming it from a soup of simple amino acids to one containing proteinoids and oligopeptides.  These compounds, in the quadrillions, combine randomly in the sea until a self-replicating combination happens.  The rest is evolutionary history.

The currently strongest hypothesis suggests that life began with these self-replicating molecules transforming, through evolution, into self-replicating RNA sequences and, from there, into self-replicating DNA.  There are conditions that quite likely existed on the prebiotic Earth that can create 'cells' that could capture the correct combination of self-replicating molecules and thereby provide a controlled environment within which the evolution to RNA and then DNA could take place.

This doesn't explain it all.  It has been suggested that in order to sustain itself over a period long enough for evolution to take hold, these original cells would need to have a photosynthetic metabolism.  Additionally, even in the absence of mitochondria, some method of metabolic processing would need to exist.  Once again, it stretches our credulity to accept that all of this could happen at once within one protocell.

Given the above, Intelligent Design does have a reasonable alternative.  However, there are two caveats.  First, where did these precise qualities of molecules come from, and how reasonable is it that the characteristics of carbon, water, phosphorus and other compounds are such that they are capable of creating such a complex and self-sustaining system?  It is a more fundamental and philosophical question, reminiscent of the Finely Tuned Universe.  We might call it the 'Finely Tuned Organic Chemistry' question.  Second, the application of Occam's Razor does not really militate strongly for this abiogenesis hypothesis over the Intelligent Design hypothesis.  Until we have it 'nailed down', the abiogenesis hypothesis seems more than a little like a 'just so' story.

Another hypothesis that has found favor in the scientific community is the panspermia hypothesis.  Quite simply, it argues that the first self-replicating molecules blew in on the stellar wind.  Or, alternatively, it is hypothesized that an extra-solar body collided with the Earth and its cargo of self-replicating molecules somehow survived the trauma of impact and began replicating.  Panspermia hypotheses have two serious problems.  First, there is, so far, no plausible delivery mechanism.  Second, it, too, kicks the can down the road.  Where did these self-replicating molecules come from, and how did they form there?



Recently it was suggested, seriously, that 'aliens did it.'  There are two problems with that.  First, what, precisely, is the difference between God and an Alien Creator?  Second, where did the aliens come from?  Again, the Intelligent Design can is being kicked down the road, not eliminated.

In the final analysis, we do not, at present, know how life began, either here or elsewhere.  A naturalistic process is not unreasonable, though it suffers from the 'finely tuned organic chemistry' problem.  Intelligent Design is, and should be, in the hopper of possible solutions, though characterizing the designers as aliens or God is unsupported and, frankly, not helpful.  At present, Occam's Razor would appear to advise us that some sort of intelligent and volitional agent interceded to reduce the improbability.

The Phyla and Class Problem


Evolution, defined as natural selection acting upon random mutations, works well in explaining how species radiate within a genus and perhaps explains how genera emerge from within a family.  However, it is ill-equipped to explain the Cambrian explosion of phyla and, most likely, the emergence of many Classes.

I first became aware of this problem with evolution in a Physical Anthropology book that questioned how evolution could have resulted in bipedalism if its survival advantage was increased height in order to see over the savannah grass without a concomitant increase in size.  In essence, there didn't seem to be any benefit to partial bipedalism, since it would be insufficient to provide the benefit of seeing over the grass.  While there are several things wrong with this argument, it got me thinking about the problem of multiple mutations being required to provide the first increment of survival benefit.

I have had the most difficulty with birds and their modifications for the sake of flight.  In order to accomplish the feat, birds must 1) modify their presumed scales into feathers, 2) redesign their bones to make them much lighter without sacrificing too much strength, and 3) reconfigure their front legs in such a way as to allow them, with the feathers, to create an airfoil in the form of wings.  None of these modifications seems to convey any survival benefit on its own.  Only in combination do they allow for flight and its clear survival benefits, both for a predator and for prey.

The problem is that flight is extraordinarily difficult for animals as big as birds.  In other words, flight seems to require close to complete evolution to take place simultaneously in three separate genetic polymorphisms before natural selection can act upon the genome.  In fact, partial modifications would appear to be anti-survival.  If there is a naturalistic explanation, it would appear that we need to find a different evolutionary mechanism.



Might an intelligent and volitional agent have interceded to make these large jumps in evolution?  It certainly is less problematical than any existing theory.  Isaac Asimov once suggested that perhaps these large morphological changes are somehow implicit in the chemistry of DNA.  Even if we were to accept that, again, we are just kicking the intelligent design can down the road.  How likely is it that the chemistry of DNA would just happen to be such as to cause such dramatic and adaptive modifications to take place naturally?  It suggests that the very structure of chemistry has been purposefully contrived so as to accommodate the needs of life.



The Lesson from Cellular Automata


In the 1940s, Stanislaw Ulam and John von Neumann were the first to work with cellular automata.  They are, essentially, a simple set of rules applied to a grid, where the condition (visually, black and white is the most common) of every square in the grid at t+1 is determined by the condition of its neighbors at t.  Typically, these systems have rules for birth, survival and death, together with a description of the initial state of the system.


In the 1970s they were popularized by John Horton Conway's 'Game of Life'.  For the past twenty years, Stephen Wolfram has developed them further and has reached a number of controversial conclusions.  One of his research assistants, Matthew Cook, demonstrated that some sets of rules are Turing complete.  The quick explanation is that such a set of rules can carry out any computation a computer can.
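
To make the idea concrete, here is a minimal sketch of Conway's Game of Life in Python; the glider seed is just an illustrative starting state:

    from collections import Counter

    def step(live):
        # One generation: count the live neighbors of every candidate cell.
        counts = Counter((x + dx, y + dy)
                         for x, y in live
                         for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                         if (dx, dy) != (0, 0))
        # Birth on exactly 3 neighbors; survival on 2 or 3; death otherwise.
        return {cell for cell, n in counts.items()
                if n == 3 or (n == 2 and cell in live)}

    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    for _ in range(4):
        glider = step(glider)
    print(sorted(glider))   # the same shape, moved one cell diagonally

A handful of arithmetic rules on a grid, yet out of them come gliders, oscillators and, with the right rule set, a universal computer.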

What this means is that some, though far from all, sets of simple rules can lead to evolution, complexity and stability that are far from intuitively inherent in the rules.  This is significant to Intelligent Design because it can be used to support Asimov's conjecture that the rules that govern DNA may have implicit in them complexities that are not clearly apparent.  For example, some precursor organism suddenly and simultaneously developing wings, light bones and feathers may not be a random event.  It may be implicit in the DNA itself.

Most dilettantes imagine that DNA is a long string of base pairs where the probability of a substitution is about the same over the full length of the strand.  This is not the case.  While determining the mutation rate at a particular gene site is very difficult, we have learned enough to know that, as an example, the mutation rate for eye color is different from, say, the mutation rate for hair color.  Also, the mutation rates for blue to brown, blue to green, brown to green, brown to blue, green to blue and green to brown are probably all different.

What this means is that, between cellular automata and differential mutation rates, populations exposed to highly mutagenic environments may, with reasonable probability, create organisms with a patterned suite of mutations.  However, there is still something more than a little hinky about the notion that, of all the possible mutational combinations, ones such as wings/light bones/feathers should emerge.  While this provides a mechanism for phylum-sized morphological differences to emerge, it would seem to suggest that the rules underlying mutation rates have been intelligently manipulated.

However, anyone who has experienced the complex and apparently designed systems that can emerge in cellular automata is likely to be inclined to accept that such a complex system of mutations, as suggested by Asimov, may be inherent in the chemistry of DNA.


So, as should be the case in unsettled scientific questions, there are reasons to favor either a naturalistic universe or an intelligently designed one.  However, on balance, intelligent design is substantially less complex than the purely naturalistic universe.  While carbon-based complexity may be inherent in the laws of the universe, the structure of those laws stretches one's credulity as a natural phenomenon.
 

Consequently, as a working hypothesis, I am a Deist.  In other words, I operate on the assumption that the Universe is the result of intelligent volition, but I do not assume that the creator(s) have any particular interest in me or humanity in general.  I do not even assume that the creator(s) still exist.  It is often stated that the assertion of Intelligent Design is not scientific because it is not amenable to verification.  Actually, that is a bit of a red herring.  That the Universe is a product of natural processes is also not amenable to verification.  Yet it is a foundational assertion of science.

Actually, it is not necessarily the case for either intelligent design or naturalism that they are not subject to verification.  We are not likely to find direct experimental evidence for either.  However, that is not the only route to truth.  Proof by reductio ad absurdum is an acceptable scientific approach, and both positions could be amenable to it.  The more we learn about the Universe, the more likely it is that a disproof of one will surface.  Since either the Universe was volitionally created or it wasn't, the disproof of one serves as a proof of the other.

It is ill-advised for advocates of Intelligent Design to militate for its teaching in schools, although something like the above could fall legitimately within the purview of a Philosophy of Science section.  Absent that, teaching evolution without mention of the problems it has as a universal explainer of life, or teaching Physics without introducing the student to the problem of creation, is irresponsible.  The Mediocracy believes in naturalism generally, and evolution specifically, as settled doctrine, and that is why it is difficult to change the curriculum.

Here is a logical peculiarity in our science curriculum.  If a teacher were to mention all the possible explanations for the emergence of life and included among them 'aliens did it', the teacher would be in no trouble.  However, 'aliens' are just one of several possible intelligent designers.  Logically, they fall within that proscribed category of explanation.  What we have here is not scientific principle.  What we have is an animus of the educational system toward religion and, by extension, toward any explanation that would not categorically preclude a God.

Anyway, I titled this essay 'Rational Intelligent Design', and I believe that I have successfully defended my position that Intelligent Design is the most rational working hypothesis in answer to several open scientific questions.  There is a very strong tendency within the Liberal Culture to characterize adherents of Intelligent Design as universally ignorant and stupid.  I am profoundly neither, and I hope that disproves that offensive little piece of ad hominem.

Tuesday, February 19, 2013

An Elaboration on the Monty Hall Problem

Many, many years ago I breezed into a Mensa meeting and saw the then Chancellor of the Triple Nine Society, Cyd Bergdorf, and Ron Hoeflin working busily on some kind of problem.  I knew Cyd personally, but I knew Ron only by reputation.  Being the gadfly that I was, I walked over, sat down and asked what they were working on.  It was the now famous 'Monty Hall Problem.'  They explained it to me.  It goes like this:

Suppose you're on a game show, and you're given the choice of three doors: Behind one door is a car; behind the others, goats. You pick a door, say No. 1, and the host, who knows what's behind the doors, opens another door, say No. 3, which has a goat. He then says to you, "Do you want to pick door No. 2?" Is it to your advantage to switch your choice?

I immediately said, 'You switch, of course.  And, by the way, the qualification that the host knows what is behind the doors is unnecessary.  I'm going to go get a drink.'  This remark, that the qualification is not needed, has led to a never-ending dispute with other high IQ people, most recently Garth Zietsman and Rick Rosner.  So, I am going to put the argument here, where it may be able to garner a larger audience.

To make it very clear, I am stating that the reservation is unnecessary precisely in the context of the problem as stated.  There are a whole lot of Bayesian assumptions that, I argue, do not apply, because there is nothing in the problem that states that it should be taken as one in a series of similar games.

First, I will reproduce verbatim what I found to be the most clearly stated positions of Rick Rosner and Garth Zietsman.  If you want to read the full dialogue, it is available here.

Rick Rosner
Three doors, you pick one, then Monty randomly opens one. 1/3 chance the game is wrecked, because Monty prematurely revealed a car - he's never supposed to reveal the car until you make your final choice of door. 2/3 chance the game isn't wrecked. Two equal possibilities among the 2/3 of games that aren't wrecked - car is behind your first choice, or car is behind the door Monty didn't open. Random door-opening seems to either wreck games or leave an equal probability among the remaining unopened doors.

Garth Zietsman
There is an urn with one blue and two green balls. A selects a ball followed by B (randomly) who notes his ball is green. If A had selected a blue ball B would have had two ways to select a green ball. If A had selected a green ball B would have had one way in which to select a green ball. However A had two ways in which to select a green and one way to select a blue ball. So the chance of A having selected a blue given that B's was green is 1*2/(1*2 + 2*1) = 1/2.

When B can look into the urn and deliberately pick out a green then his probability of selecting green is always 1 and the probability of A selecting blue given that B has green is simply the initial probability of his selecting blue.


Michael Ferguson
Now, both Rick and Garth are correct if we assume that the game described is one element in a string of games where n >> 1.  However, that is not stated in the problem.  We have no reason to assume that this game has ever been played before or will ever be played again.  If that were the problem, they would be correct.  We could expect to improve our odds from one third to two thirds if Monty knows what is behind the doors and chooses to always open a door with a goat behind it.  We would expect that our odds would be the same whether we stay or switch if Monty doesn't know what is behind the doors.  However, that is not the problem as stated.

When we flip a coin, the natural odds of a fair toss coming up heads are 1/2.  If we choose one door out of three, the natural odds of choosing the car are 1/3.  One might ask how these natural odds of 1/3 could somehow change to 1/2.  The answer is that they do not.  Something very different is the cause.

Suppose we play 12 games.  We would expect to choose a car door 4 times.  That gives us the 4/12 = 1/3 natural odds.  If Monty always opens a goat door we didn't choose, we will have 8 unopened car doors, and the odds if we switch are 8/12, or 2/3.  However, if Monty doesn't know, we will have 4 games where we chose the car door, 4 games where the unchosen door has the car, and 4 games where Monty, as Rick puts it, wrecks the game.  So now, switch or not, we should expect to win 4 cars.
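
The frequencies in that paragraph are easy to check by simulation.  Here is a minimal sketch in Python; to be clear, this is exactly the kind of repeated-run experiment that I argue does not apply to the single stated game, so it only confirms the numbers above.  The function and variable names are my own:

    import random

    def play(n_games, host_knows):
        stay = switch = wrecked = 0
        for _ in range(n_games):
            car = random.randrange(3)
            pick = random.randrange(3)
            if host_knows:
                # Monty deliberately opens a goat door the player didn't pick.
                opened = next(d for d in range(3) if d != pick and d != car)
            else:
                # Monty opens an unpicked door at random; he may reveal the car.
                opened = random.choice([d for d in range(3) if d != pick])
                if opened == car:
                    wrecked += 1      # the game is 'wrecked', as Rick puts it
                    continue
            other = next(d for d in range(3) if d != pick and d != opened)
            stay += (pick == car)
            switch += (other == car)
        return stay, switch, wrecked

    print(play(12000, host_knows=True))    # ~(4000, 8000, 0): switching wins 2/3
    print(play(12000, host_knows=False))   # ~(4000, 4000, 4000): 4 cars either way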

So the odds actually do not change from the natural 1/3 to 1/2.  Rather, we have created a subset that doesn't include the four unopened car doors.  This artificially decreases the probability that the unchosen door has the car.  In other words, and this is absolutely the crux of the issue, the lowering of the odds does not reside in the individual games but rather is a characteristic of the subset that is created.  The change in the odds is a direct result of opening car doors.

In other words, by resorting to Bayesian reasoning, Garth and Rick are calling into existence games where car doors are opened when, in fact, there is no indication in the language of the problem that any other games will ever be played, with or without Monty knowing what is behind the doors.  In the only case of the game that we know to exist, Monty revealed a goat.

This game naturally belongs to the set of game strings of length n in which no car doors are opened.  By naturally, I mean that there is no way to take this game out of that set.  If the problem stated that this game belongs to a set of n games in which Monty Hall knows what is behind the doors, I could take it out of that set by stating that Monty Hall doesn't know what is behind the doors.  But it irrevocably belongs to the set in which no car door is opened.

I then state that a common factor of all game strings, regardless of the value of n, is that we expect twice as many cars behind the unchosen doors as behind the chosen doors.  To illustrate the significance, I use the following thought experiment.

Suppose we play twelve simultaneous games. We choose one door in each of twelve sets of three doors. Our expectation is that we will choose a car four times and will choose a goat the other eight times. Then Monty opens one door in each set of three doors and reveals a goat. Now, we have chosen a car door four times and the twelve unchosen doors contain the other eight cars. Clearly, we should switch doors when asked.

Now, after all this transpires, Monty tells us that he didn't know what was behind the doors. We are surprised, because the probability of his choosing only goats in twelve straight games is (2/3)^12, odds of approximately 130:1 against. Yet it does not change the proper strategy; there are still only four cars behind the chosen doors and eight behind the unchosen doors.

Now suppose that we play three simultaneous games. The logic is the same. Our chosen doors hold only one car and the unchosen doors hold two. We should switch. We are not so surprised when Monty tells us that he didn't know what was behind the doors, because the probability of his revealing only goats is (2/3)^3, odds of just a little over 3:1 against. In fact, for n games in which no car doors are opened, no matter the value of n, switching doubles our chances.
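
The two 'surprise' figures above are just powers of 2/3; a two-line check in Python:

    p12, p3 = (2 / 3) ** 12, (2 / 3) ** 3
    print(round(1 / p12), round(1 / p3, 2))   # -> 130 and 3.38, the odds quoted above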

This happens because there is nothing magical about Monty knowing or not. The advantage falls from 2:1 to 1:1 by the process of games with opened car doors being eliminated from the pool. In other words, the odds do not change one iota UNTIL a car door is opened. When we are presented with one game in which a goat was revealed and n=1, there are no car door openings that can change the odds of having chosen the car door from 1/3 to 1/2. Consequently, in one game in which a goat was revealed, my initial probability of 1/3 of having chosen the car door remains. The unchosen door still carries 2/3 of the probability.


For some reason, even people at the highest IQ level can have difficulty grasping this.  I want to make clear that I am not arguing against Bayesian probability.  I am only arguing that it is applicable only in those cases where multiple 'runs' of the game are taking place.