The scale used to indicate magnitude originates in the
Hellenistic practice of dividing stars visible to the naked eye into six magnitudes. The
brightest stars in the night sky were said to be of
first magnitude (m = 1), whereas the faintest were of sixth magnitude (m = 6), which is the limit of
visual perception (without the aid of a
telescope). Each grade of magnitude was considered twice the brightness of the following grade (a
logarithmic scale), although that ratio was subjective as no
photodetectors existed. This rather crude scale for the brightness of stars was popularized by
Ptolemy in his
Almagest, and is generally believed to have originated with
Hipparchus. In 1856, Norman Robert Pogson formalized the system by defining a first-magnitude star as a star that is 100 times as bright as a sixth-magnitude star, thereby establishing the logarithmic scale still in use today. This implies that a star of magnitude m is about 2.512 times as bright as a star of magnitude m + 1. This figure, the
fifth root of 100, became known as Pogson's Ratio.
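As a minimal illustration (a sketch in Python, using only the definitions above), Pogson's Ratio and the flux ratio implied by any magnitude difference follow directly:

import math

# Pogson's Ratio: the fifth root of 100, about 2.512.
pogson_ratio = 100 ** (1 / 5)
print(pogson_ratio)  # 2.51188643150958

# A difference of n magnitudes corresponds to a flux ratio of 100^(n/5),
# so a difference of 5 magnitudes is exactly a factor of 100.
def flux_ratio_for_difference(delta_m):
    return 100 ** (delta_m / 5)

print(flux_ratio_for_difference(1))  # ~2.512 (one magnitude)
print(flux_ratio_for_difference(5))  # 100.0 (first vs. sixth magnitude)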
 The zero point of Pogson's scale was originally defined by assigning
Polaris a magnitude of exactly 2. Astronomers later discovered that Polaris is slightly variable, so they switched to
Vega as the standard reference star, assigning the brightness of Vega as the definition of zero magnitude at any specified wavelength.
Apart from small corrections, the brightness of Vega still serves as the definition of zero magnitude for visible and
near infrared wavelengths, where its
spectral energy distribution (SED) closely approximates that of a
black body for a temperature of 11,000 K. However, with the advent of
infrared astronomy it was revealed that Vega's radiation includes an
infrared excess, presumably due to a
circumstellar disk consisting of
dust at warm temperatures (but much cooler than the star's surface). At shorter (e.g. visible) wavelengths, there is negligible emission from dust at these temperatures. However, in order to extend the magnitude scale consistently into the infrared, this peculiarity of Vega should not affect the definition of the scale. The magnitude scale was therefore extrapolated to all wavelengths on the basis of the
black body radiation curve for an ideal stellar surface at 11,000 K uncontaminated by circumstellar radiation. On this basis the
spectral irradiance (usually expressed in
janskys) for the zero-magnitude point can be computed as a function of wavelength.
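A rough sketch of that computation in Python, assuming a Planck spectrum at 11,000 K normalized to an assumed V-band (550 nm) zero-point flux density of about 3640 Jy (the normalization value and the choice of band are assumptions for illustration, not taken from the text above):

import math

h = 6.62607015e-34   # Planck constant, J s
c = 2.99792458e8     # speed of light, m/s
k = 1.380649e-23     # Boltzmann constant, J/K

def planck_nu(nu, temperature):
    # Black-body spectral radiance B_nu(T); only its shape matters here,
    # since the curve is rescaled to match the chosen zero point.
    return (2 * h * nu**3 / c**2) / math.expm1(h * nu / (k * temperature))

T = 11000.0            # ideal stellar surface temperature, K
nu_V = c / 550e-9      # frequency corresponding to the V band (~550 nm)
F0_V = 3640.0          # assumed zero-magnitude flux density at V, in Jy

# Scale the Planck curve to reproduce the assumed V-band zero point;
# the zero-magnitude spectral irradiance at any wavelength then follows its shape.
scale = F0_V / planck_nu(nu_V, T)

def zero_point_jy(wavelength_m):
    return scale * planck_nu(c / wavelength_m, T)

print(zero_point_jy(550e-9))   # 3640.0 by construction (V band)
print(zero_point_jy(2.2e-6))   # extrapolated zero point near 2.2 micrometres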
Small deviations are specified between systems whose measurement apparatus was developed independently, so that data obtained by different astronomers can be properly compared. Of greater practical importance, magnitude is defined not at a single wavelength but over the response of the standard spectral filters used in
photometry across various wavelength bands.
With the modern magnitude systems, brightness over a very wide range is specified according to the logarithmic definition detailed below, using this zero reference. In practice such apparent magnitudes do not exceed 30 (for detectable measurements). The brightness of Vega is exceeded by four stars in the night sky at visible wavelengths (and more at infrared wavelengths) as well as bright planets such as Venus, Mars, and Jupiter, and these must be described by negative magnitudes. For example,
Sirius, the brightest star of the
celestial sphere, has an apparent magnitude of −1.4 in the visible; negative magnitudes for other very bright astronomical objects can be found in the table below.
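The logarithmic definition referred to above relates a magnitude difference to a flux ratio as m1 − m2 = −2.5 log10(F1/F2). A minimal sketch in Python, using only the magnitude values quoted above:

import math

def magnitude_difference(flux_ratio):
    # m1 - m2 = -2.5 log10(F1 / F2)
    return -2.5 * math.log10(flux_ratio)

def flux_ratio(m1, m2):
    return 10 ** (-0.4 * (m1 - m2))

# Sirius (m = -1.4) relative to Vega (m = 0 by definition):
print(flux_ratio(-1.4, 0.0))  # ~3.6: Sirius appears about 3.6 times brighter than Vega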
Astronomers have developed other photometric zeropoint systems as alternatives to the Vega system. The most widely used is the
AB magnitude system,
 in which photometric zeropoints are based on a hypothetical reference spectrum having constant
flux per unit frequency interval, rather than using a stellar spectrum or blackbody curve as the reference. The AB magnitude zeropoint is defined such that an object's AB and Vega-based magnitudes will be approximately equal in the V filter band.
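Because the AB reference spectrum has constant flux per unit frequency, an AB magnitude converts directly into a flux density. A minimal sketch in Python, using the conventional AB zero-point flux density of about 3631 Jy (a standard constant, not stated in the text above):

import math

AB_ZERO_JY = 3631.0  # flux density corresponding to m_AB = 0 (conventional value)

def ab_mag_to_jy(m_ab):
    # Flux density in janskys for a given AB magnitude.
    return AB_ZERO_JY * 10 ** (-0.4 * m_ab)

def jy_to_ab_mag(f_nu_jy):
    return -2.5 * math.log10(f_nu_jy / AB_ZERO_JY)

print(ab_mag_to_jy(0.0))   # 3631.0 Jy by definition
print(ab_mag_to_jy(25.0))  # ~3.6e-7 Jy, i.e. a few tenths of a microjansky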