By Joshua N. Winn, Princeton University
The best way to think of a research telescope is as a machine for measuring the properties of photons from celestial bodies. And what are those properties? The direction a photon came from, its energy or wavelength, and the time of its arrival. And then there is one more thing. What is that?
Research Telescopes and Polarization
Apart from measuring the properties of photons such as their wavelength, direction, and time of arrival, research telescopes also measure their polarization. Polarization is one of a photon’s wavelike properties and refers to the direction in which the electric field of the wave oscillates.
Another thing worth mentioning about research telescopes is the traditional system of units optical astronomers use to measure the flux of a source: the power per unit area arriving at the Earth in the form of light.
Measuring the Flux
It would be logical to express flux in standard metric units, watts per square meter. But we hardly ever do that. By hallowed tradition, we use a different scheme, called a ‘magnitude scale’. The apparent magnitude, m, of a source is defined as −2.5 times the log (base 10) of the flux relative to a reference flux F0.
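That definition is easy to express as a small Python sketch (the function name and sample values here are illustrative, not from the original):

```python
import math

def apparent_magnitude(flux, flux_ref):
    """Apparent magnitude: m = -2.5 * log10(F / F0)."""
    return -2.5 * math.log10(flux / flux_ref)

# A source with the same flux as the reference has magnitude 0;
# a source 100 times fainter has magnitude 5.
m_equal = apparent_magnitude(1.0, 1.0)
m_faint = apparent_magnitude(0.01, 1.0)
```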
Why a logarithmic scale? To keep the numbers manageable: astronomers deal with fluxes that range over many orders of magnitude, from nearby stars and planets to the most distant galaxies. It’s the same reason we use the Richter scale for earthquakes and the decibel scale for sound waves; both of those are logarithmic scales, too. But why the minus sign? And the 2.5? And what’s the reference flux?
This article comes directly from content in the video series Introduction to Astrophysics.
Let’s start with the easiest one, the reference flux. The choice was made to set F0 equal to the flux of Vega, a bright star in the northern sky. That way, astronomers don’t have to worry about calibrating their cameras to measure starlight in watts per square meter; they can just compare the flux of their star to the flux of Vega. And if they ever want to convert to standard units, they can look up the flux of Vega in watts per square meter, which somebody, somewhere, has already measured.
What about that minus sign? It means that brighter objects have lower magnitudes, which makes the magnitude scale like a ranking system for flux: a 1st magnitude star is brighter than a 2nd magnitude star, which is brighter than a 3rd, and so on.
The 2.5 is a little harder to explain. When we invert the magnitude equation, we find F equals F0 times 10 to the power of −0.4 times m. So, a star with magnitude zero has F equals F0; that is, the star has the same flux as Vega. A star with magnitude 1 is fainter by a factor of 10 to the −0.4, which means it delivers only about 40% of Vega’s flux. If we go down to a 5th magnitude star, F equals F0 times 10 to the power of −0.4 times 5; that’s 10 to the −2, or one hundredth. The lesson is that increasing the magnitude by 5 units corresponds to lowering the flux by a factor of 100.
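The factor-of-100 rule can be checked numerically; a minimal sketch, with an illustrative function name:

```python
def flux_ratio(magnitude):
    """Flux relative to the reference: F / F0 = 10 ** (-0.4 * m)."""
    return 10 ** (-0.4 * magnitude)

ratio_1 = flux_ratio(1)  # roughly 0.4: a 1st magnitude star has ~40% of Vega's flux
ratio_5 = flux_ratio(5)  # 0.01: five magnitudes fainter means 1/100 of the flux
```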
Hipparchus of Nicaea
That convention of lowering the flux by a factor of 100 can be traced back to the ancient astronomer Hipparchus of Nicaea, from the 2nd century BCE. He created a star catalog and ranked the stars from 1 to 6 based on eyeball estimates of brightness. His catalog was influential and widely used by astronomers for centuries. Nearly 2000 years later, we learned that the human eye has a roughly logarithmic response, so Hipparchus’s magnitude scale was, in effect, a logarithmic flux scale.
As it turned out, the stars he ranked 6th magnitude are about 100 times fainter than 1st magnitude stars. The subsequent mathematical definition of apparent magnitude reflected a desire to match the ancient star catalog of Hipparchus.
We can also use magnitudes to represent color, in addition to flux. One way to quantify the color of a light source is to measure its flux through a colored filter, say, blue, that only admits short-wavelength photons. Then measure its flux through a red filter, and take the ratio, FB over FR. Objects that are intrinsically blue will have a higher ratio than red objects.
The color index is defined as the difference between the 2 corresponding apparent magnitudes. When we write that difference in terms of fluxes, using the definition of apparent magnitude, and then use some standard log properties to rearrange things, we see that the color index is a logarithmic scale for the flux ratio, FB over FR.

Magnitudes can also be used to represent luminosity, in addition to flux. Luminosity refers to the intrinsic power of a source, independent of its distance from Earth. For a source at distance d that emits equally in all directions, the relation between luminosity and flux is F equals L over 4 pi d squared.
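Both relations are easy to express in code; a sketch with illustrative names (note that the reference flux F0 cancels out of the color index):

```python
import math

def color_index(flux_blue, flux_red):
    """m_B - m_R = -2.5 * log10(F_B / F_R); bluer sources give lower values."""
    return -2.5 * math.log10(flux_blue / flux_red)

def flux_from_luminosity(luminosity, distance):
    """Inverse-square law for an isotropic source: F = L / (4 * pi * d**2)."""
    return luminosity / (4.0 * math.pi * distance ** 2)
```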
Instead of expressing the luminosity of a source in watts, or as a multiple of the Sun’s luminosity, sometimes we express it as an absolute magnitude. That’s defined as the apparent magnitude the source would have, if we could magically reposition it 10 parsecs away. Ten parsecs is an arbitrary choice. The point is that if we put everything at a common distance, then the differences in apparent magnitude signify differences in luminosity.
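Combining the magnitude definition with the inverse-square law gives the standard conversion, M equals m minus 5 times the log (base 10) of d over 10 parsecs. A minimal sketch, with illustrative names:

```python
import math

def absolute_magnitude(apparent_mag, distance_pc):
    """M = m - 5 * log10(d / 10), with the distance d in parsecs."""
    return apparent_mag - 5.0 * math.log10(distance_pc / 10.0)

# At exactly 10 parsecs the apparent and absolute magnitudes agree;
# moving a source 10 times farther away adds 5 to its apparent magnitude.
m_at_10pc = absolute_magnitude(3.0, 10.0)
m_at_100pc = absolute_magnitude(0.0, 100.0)
```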
Common Questions about Research Telescopes
Q: What properties of photons do research telescopes measure?
Research telescopes measure a photon’s wavelength, direction, time of arrival, and polarization.

Q: Where does the convention of lowering the flux by a factor of 100 come from?
The convention can be traced back to the ancient astronomer Hipparchus of Nicaea, from the 2nd century BCE. He created a star catalog and ranked the stars from 1 to 6 based on eyeball estimates of brightness.

Q: How is the Sun’s luminosity expressed in the magnitude system?
The Sun’s luminosity is sometimes expressed as an absolute magnitude, the apparent magnitude it would have at a distance of 10 parsecs.