
What exactly prevents us from seeing stars during the day? I always believed that sunlight reflected off particles in the air, making them visible, and that would cause the stars to fade away. But some claim the reason stars aren't seen in moon landing photos is that they were taken during lunar days. Since the moon has no atmosphere, I must be mistaken.

Rebecca Pitts:
Your reasoning isn't wrong, just incomplete; you're applying one explanation to two distinct scenarios. Sunlight can scatter off anything between a light source and the observer, including the parts of your eye in front of your retinas, but even without this scattering it would still be hard to spot the stars. The Sun, along with anything reflecting its light, is simply too bright in contrast to the surrounding sky.
To illustrate how much brighter the Sun and the daytime sky are than the stars, let me introduce the peculiar method astronomers use to measure the brightness of objects relative to one another or to a standard star. This system, known as the magnitude system, may not make much intuitive sense today because it's an ancient concept, dating back about 2,000 years to Hipparchus or Ptolemy (it's so old, we can't even agree on its exact origin). The key details are shown in the following images:
Astronomy 3130 [Spring 2015] Course Homepage, Photometry lecture.
(Just so you know, that infographic is a bit too optimistic in one respect: the limit visible to the naked eye in most cities is closer to 3rd magnitude.)
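The scale in those figures is logarithmic: by definition, a difference of 5 magnitudes is a factor of exactly 100 in brightness, and brighter objects get *lower* (eventually negative) numbers. A minimal sketch of the conversion, with Sirius's standard apparent magnitude plugged in:

```python
def flux_ratio(m_faint: float, m_bright: float) -> float:
    """Brightness ratio implied by a magnitude difference.

    Each 5 magnitudes is a factor of exactly 100 in flux,
    so one magnitude is a factor of 100 ** (1/5), about 2.512.
    """
    return 100 ** ((m_faint - m_bright) / 5)

# Sanity check: 5 magnitudes = exactly 100x
print(flux_ratio(5.0, 0.0))  # 100.0

# Sirius (m ~ -1.46) vs. a 6th-magnitude star at the naked-eye limit:
print(flux_ratio(6.0, -1.46))  # roughly 960x brighter
```

Note that the function runs "backwards" to everyday intuition: the fainter object has the larger magnitude.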
To place the Sun and Moon on that scale and demonstrate just how far the magnitude system can extend into the negative range, consider this:
The daytime sky is so bright that it overpowers anything dimmer than about magnitude -4. So, indeed, on Earth, the atmosphere is the culprit, primarily due to Rayleigh Scattering.
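Rayleigh scattering strength scales as the inverse fourth power of wavelength, which is why the scattered daytime sky is blue in the first place. A quick back-of-the-envelope, using rough wavelengths for blue and red light (my choice of 450 and 650 nm is illustrative, not from the original):

```python
# Rayleigh scattering intensity scales as 1 / wavelength^4, so shorter
# (bluer) wavelengths scatter much more strongly than longer (redder) ones.
blue_nm, red_nm = 450, 650  # rough representative wavelengths

ratio = (red_nm / blue_nm) ** 4
print(f"Blue light scatters about {ratio:.1f}x more than red")  # ~4.4x
```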
But what about scenarios where the atmosphere isn’t a factor?
Combining the data from both figures, the full moon shines at least 25,000 times brighter than Sirius. The Sun, in turn, is 400,000 times brighter than the full moon, making it about 10,000,000,000 times brighter than the brightest star in the night sky. A candle's output is about 1 candela (the SI unit of luminous intensity). What's 10,000,000,000 times brighter than a candle? Something like the Luxor Sky Beam in Las Vegas, which shines at 42.3 billion candela. Trying to spot a star while the sun is in view will never be easier than trying to see a few candles while staring into the world's most powerful spotlight.
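You can verify those ratios directly from the commonly quoted apparent magnitudes of the three objects (the magnitude values below are standard figures, not taken from the charts above):

```python
# Commonly quoted apparent magnitudes (assumed standard values):
SUN, FULL_MOON, SIRIUS = -26.7, -12.7, -1.46

def ratio(m_faint, m_bright):
    # Each 5-magnitude step is a factor of 100 in brightness.
    return 100 ** ((m_faint - m_bright) / 5)

moon_vs_sirius = ratio(SIRIUS, FULL_MOON)
sun_vs_moon = ratio(FULL_MOON, SUN)
sun_vs_sirius = ratio(SIRIUS, SUN)

print(f"Full moon vs. Sirius: {moon_vs_sirius:,.0f}")  # ~31,000
print(f"Sun vs. full moon:    {sun_vs_moon:,.0f}")     # ~400,000
print(f"Sun vs. Sirius:       {sun_vs_sirius:.2e}")    # ~1.2e10
```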
The ratio of signal intensity (or brightness, in the case of light) between the faintest detectable signal and the point at which your instrument saturates is called dynamic range; it is essentially the maximum contrast ratio the instrument can record. So to photograph the sun and have even the brightest star appear in the same shot, your sensor needs a dynamic range of about 10 billion. The dynamic ranges of existing technologies are as follows:
- Charge-Coupled Devices (CCDs, used in digital cameras): 70,000–500,000, depending on the grade (the 16-bit analogue-to-digital converters typically used with consumer- and education-grade CCDs reduce this to about 50,000)
- Charge-Injection Devices (the more advanced version of the CCD, where pixels are treated individually rather than in rows and columns): 20 million, as this PDF shows.
- Human Eye: variable, but typically maxing out at around 15,000
- Photographic Film: a few hundred. Yes—that's all.
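To make the gap vivid, it helps to restate those figures in photographic "stops" (doublings of light), since each stop is one more bit of contrast a sensor must span. A sketch using the rough numbers from the list above:

```python
import math

# Dynamic range (brightest / faintest recordable signal) for the
# technologies listed above, using the rough figures from the text.
dynamic_ranges = {
    "photographic film": 300,            # "a few hundred"
    "human eye (static)": 15_000,
    "consumer CCD (16-bit ADC)": 50_000,
    "charge-injection device": 20_000_000,
    "needed for sun + stars": 10_000_000_000,
}

for name, dr in dynamic_ranges.items():
    # log2 converts a contrast ratio into photographic stops.
    print(f"{name:>28}: {dr:>14,} ({math.log2(dr):5.1f} stops)")
```

Film covers roughly 8 stops; photographing the sun and Sirius together would take more than 33.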
To make matters worse, film doesn't even respond to 98 to 99 percent of the light that hits it. Your eye is just as inefficient, but at least its dynamic range is closer to a CCD's than to film's. CCDs capture upwards of 90 percent of the incident light. You can read more about the advantages of CCDs here (their dynamic range figure for film is slightly low). But back in the 1960s, CCDs didn't exist; NASA had to rely on film. (Here's an entire article about NASA's film supplies and their specs during the Apollo Program.)
At the Earth's (and Moon's) distance from the sun, a surface facing the sun directly receives about 1368 watts per square meter (W/m^2) of solar energy (see Solar Radiation at Earth). For simplicity, let's use 342 W/m^2, the average over the Moon's entire surface, since most of that surface meets the sunlight at an angle. The Moon reflects about 12 percent of the light that hits it. While that might seem like a small amount, for the Apollo astronauts it meant standing on a surface where every square meter was, on average, about as bright as a typical desk lamp. The astronauts' white suits and the highly reflective landing modules were brighter still. From the film's perspective, the Apollo astronauts were floodlights standing in a lamp store. That kind of light pollution makes astrophotography quite challenging.
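The desk-lamp comparison is just the two figures above multiplied together; a quick sketch, using the values quoted in the text:

```python
# Back-of-the-envelope lunar surface brightness, using the figures
# from the text (rough averages, not precise photometry):
solar_flux = 342     # W/m^2, averaged over the Moon's whole surface
lunar_albedo = 0.12  # fraction of incident sunlight the Moon reflects

reflected = solar_flux * lunar_albedo
print(f"Each square meter reflects about {reflected:.0f} W")  # ~41 W
```

About 41 watts of reflected light per square meter, i.e., every patch of ground glowing like a desk lamp.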
Regardless of the technology at hand, choosing the right exposure time is crucial to capturing the intended subject while minimizing unwanted detail. Background stars were not the Apollo crews' focus while studying the Moon, so their exposure times were calculated to highlight the lunar surface, the astronauts, landing sites, and other key subjects. As a result, the exposures for most Apollo photos were so brief that the film emulsion never gathered enough light from the background stars to register any of them.
There are, however, Apollo images that do feature stars. But since stars were never the subject, these images aren't as visually striking, as these UV shots from Apollo 16 demonstrate:
NASA
NASA (Note: false-color UV photo of Earth's geocorona taken through three filters, somewhat misaligned judging by the positions of the stars)
This post originally appeared on Quora.
