By Dr. Donald Scott
"Why is the sky dark at night?"
According to the following logical thought sequence (mathematical derivation), it should be horrendously bright.
1) The apparent intensity of a light source decreases with the square of its distance from the observer. (Assuming no interstellar dust absorption, this is true. Lumens received from a star will vary inversely as the square of the distance to that star.)
2) If the distribution of stars is uniform in space, then the number of stars at a particular distance, r, from the observer will be proportional to the surface area of a sphere whose radius is that distance. This area is directly proportional to the distance squared: A = 4(pi)r^2.
3) Therefore, at each and every possible radial distance, r, the amount of light coming toward us should be both directly proportional to the radius squared (the number of stars) and inversely proportional to the radius squared (they get dimmer with distance).
4) These two effects cancel each other.
5) So every spherical shell of radius r should add the same additional amount of light.
6) Ergo: In an infinite universe, if we sum (integrate) the light coming from all the infinite number of possible values of r, the sky should be infinitely bright.
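The cancellation in steps (3) and (4), and the runaway total in step (6), can be sketched numerically. This is a minimal illustration only, assuming a uniform density of identical stars with made-up round-number values for density, luminosity, and shell thickness:

```python
import math

# Hypothetical round numbers, for illustration only.
n = 1.0    # stars per cubic light-year (uniform density)
L = 1.0    # luminosity per star (arbitrary units)
dr = 1.0   # thickness of each spherical shell, in light-years

def shell_flux(r):
    """Total light received from the thin shell of stars at radius r."""
    stars_in_shell = n * 4 * math.pi * r**2 * dr   # grows as r^2 (step 2)
    flux_per_star = L / (4 * math.pi * r**2)       # dims as 1/r^2 (step 1)
    return stars_in_shell * flux_per_star          # the r^2 factors cancel

# Every shell contributes the same amount, n * L * dr (steps 3-5):
print(shell_flux(10.0))    # about 1.0
print(shell_flux(1000.0))  # about 1.0

# Summing shells out to some r_max gives a total that grows in direct
# proportion to r_max, so letting r_max go to infinity (step 6) would
# make the sky infinitely bright.
total = sum(shell_flux(r) for r in range(1, 1001))
print(total)  # about 1000.0
```

The point of the sketch is that the divergence comes entirely from the upper limit of the sum, which is exactly where the rest of the article argues the reasoning breaks down.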
But the sky is not infinitely bright. Why?
The resolution of this paradox can be achieved by considering how astronomers solved the problem of defining the ABSOLUTE luminosity (brightness) of a star. Because of (1) above, the more distant a star is, the dimmer it appears to be. In order to set up a standard, astronomers arbitrarily agreed that if a star were placed at a distance of 10 parsecs (approximately 32.6 light-years) from us and it looked like a magnitude 1.0 star at that distance, they would agree to say that its ABSOLUTE LUMINOSITY was 1.0.
There is a well-known relationship between distance and apparent magnitude of a star, the distance modulus: m - M = 5 log10(d / 10 pc). For example, if we put that same 1st magnitude star at a distance of about 326 LY (light-years), that is, 100 parsecs, its APPARENT MAGNITUDE would be only 6.0. Humans cannot see any star whose magnitude is higher (less luminous) than 6.4. The 200-inch Hale telescope at Mt. Palomar can see down to about magnitude 23 or so.
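That relationship is easy to check numerically. A minimal sketch, using the standard distance-modulus formula with the article's absolute magnitude of 1.0 and naked-eye limit of 6.4:

```python
import math

LY_PER_PARSEC = 3.26  # one parsec is about 3.26 light-years

def apparent_magnitude(M, d_parsecs):
    """Apparent magnitude of a star of absolute magnitude M at distance d."""
    return M + 5 * math.log10(d_parsecs / 10.0)

def max_visible_distance_ly(M, limit=6.4):
    """Farthest distance (light-years) at which the star stays naked-eye visible."""
    d_parsecs = 10.0 * 10 ** ((limit - M) / 5.0)
    return d_parsecs * LY_PER_PARSEC

# An absolute-magnitude 1.0 star at 100 parsecs (about 326 LY)
# appears as a magnitude 6.0 star:
print(apparent_magnitude(1.0, 100.0))      # 6.0

# Beyond roughly 390 LY it drops past the naked-eye limit of 6.4:
print(round(max_visible_distance_ly(1.0)))  # 392
```

Each step of 5 magnitudes corresponds to a factor of 100 in brightness, which is why every factor of 10 in distance adds exactly 5 to the apparent magnitude.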
There are approximately 8400 stars in our night sky that are brighter than magnitude 6.4. We do not see the others; they are too dim. Yes, yes, Carl Sagan used to talk about millions and millions of stars, but we can only see about 8400 with our naked eyes. Carl was well known for his tendency to exaggerate. We get the impression of millions and millions when we look up at the Milky Way, but we can see only 8400 stars, that's it, and that's under ideal conditions.
Of course, some stars are VERY much brighter than absolute magnitude 1.0 and thus would be visible farther out than 517 LY. But, many are much dimmer too, so as a rough approximation let us consider the average star.
If it is farther away than roughly 400 LY, we cannot see it (AT ALL). So it might as well not be there AT ALL. The total light in our night sky (at least as we see it with our naked eyes) is not affected by much of anything that is dimmer than magnitude 6.4 (typical stars farther away than roughly 400 LY). Even for the blue-white giant stars whose absolute luminosity puts them at -10 or -12 (much brighter than absolute magnitude 1.0), there exists some finite distance beyond which they too become invisible to us: their apparent magnitude slips past 6.4.
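The same distance-modulus arithmetic puts a number on that finite cutoff for the giants. A quick sketch, taking the text's figure of absolute magnitude -10 and the naked-eye limit of 6.4:

```python
# Even a blue-white giant of absolute magnitude -10 fades past the
# naked-eye limit of 6.4 at a finite distance:
d_parsecs = 10.0 * 10 ** ((6.4 - (-10.0)) / 5.0)
d_light_years = d_parsecs * 3.26   # about 3.26 light-years per parsec
print(round(d_light_years))        # about 62,000 light-years
```

Enormous compared with the ~400 LY limit for an average star, but still finite, which is all the argument requires.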
There are a very few vastly distant objects that we can see, such as the Great Andromeda Galaxy, M31. It is about 2.5 million LY away. But it is such a concentrated collection of stars and plasma that it looks to us about as bright as a single magnitude 4 star.
The point is this: the infinite sum implied in step (6), above, is incorrect. The sum STOPS (is truncated) at a distance of a few hundred light-years for the typical star (and somewhere beyond that even for the brightest ones). There is an upper limit on the absolute brightness of a single star; there is no such thing as an infinitely brilliant star. So there is a finite upper limit to the integration process described in step (6) above. It doesn't go out to infinity.
It may also help to remember that the human eye is different from photographic film or a CCD chip: it does not integrate over time. The longer we expose a photographic plate to starlight, the brighter the image becomes. (There is a limit even to this process in film, due to what is called reciprocity failure.) Likewise, the longer we expose a CCD camera chip, the brighter the image and the deeper into space we can see. But humans can stare at the night sky all night long and not see anything they didn't see after the first few minutes. Things don't get brighter for us the longer we look at them.
We can see those 8400 or so stars, and all the zillions of others might as well not be there AT ALL as far as our humble naked human eyes are concerned.
Olbers' Paradox is not a paradox at all if you look at it correctly. It is yet another example of theoretical mathematics applied incorrectly to a real-world phenomenon. Or, as a mathematician might say, "They got the upper limit on the integral wrong."