Image noise

Noise clearly visible in an image from a digital camera

Image noise is random (not present in the object imaged) variation of brightness or color information in images, and is usually an aspect of electronic noise. It can be produced by the sensor and circuitry of a scanner or digital camera. Image noise can also originate in film grain and in the unavoidable shot noise of an ideal photon detector. Image noise is an undesirable by-product of image capture that adds spurious and extraneous information.

The original meaning of "noise" was, and remains, "unwanted signal"; unwanted electrical fluctuations in signals received by AM radios caused audible acoustic noise ("static"). By analogy, unwanted electrical fluctuations themselves came to be known as "noise".[1] Image noise is, of course, inaudible.[2]

The magnitude of image noise can range from almost imperceptible specks on a digital photograph taken in good light, to optical and radioastronomical images that are almost entirely noise, from which a small amount of information can be derived by sophisticated processing (a noise level that would be totally unacceptable in a photograph since it would be impossible to determine even what the subject was).

Types

Gaussian noise

Principal sources of Gaussian noise in digital images arise during acquisition (e.g. sensor noise caused by poor illumination and/or high temperature) and/or during transmission (e.g. electronic circuit noise).[3]

A typical model of image noise is Gaussian, additive, independent at each pixel, and independent of the signal intensity, caused primarily by Johnson–Nyquist noise (thermal noise), including that which comes from the reset noise of capacitors ("kTC noise").[4] Amplifier noise is a major part of the "read noise" of an image sensor, that is, of the constant noise level in dark areas of the image.[5] In color cameras where more amplification is used in the blue color channel than in the green or red channel, there can be more noise in the blue channel.[6] At higher exposures, however, image sensor noise is dominated by shot noise, which is not Gaussian and not independent of signal intensity.
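
As an illustration of this additive, signal-independent model, the following sketch (using NumPy, with an arbitrary standard deviation; the function name and parameters are made up for this example) adds zero-mean Gaussian noise to an 8-bit image array.

    import numpy as np

    def add_gaussian_noise(image, sigma=10.0, rng=None):
        """Add zero-mean, signal-independent Gaussian noise to an 8-bit image."""
        rng = np.random.default_rng() if rng is None else rng
        noisy = image.astype(np.float64) + rng.normal(0.0, sigma, size=image.shape)
        # Clip back to the valid 0-255 range before converting to 8 bits.
        return np.clip(noisy, 0, 255).astype(np.uint8)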

Salt-and-pepper noise

Main article: Salt and pepper noise
Image with salt and pepper noise

Fat-tail distributed or "impulsive" noise is sometimes called salt-and-pepper noise or spike noise.[7] An image containing salt-and-pepper noise will have dark pixels in bright regions and bright pixels in dark regions.[8] This type of noise can be caused by analog-to-digital converter errors, bit errors in transmission, etc.[9][10] It can be mostly eliminated by using dark frame subtraction, median filtering and interpolating around dark/bright pixels.
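
As a rough illustration of the median-filtering approach mentioned above, the sketch below (the noise fraction, filter size, and helper names are arbitrary choices for this example) corrupts a grayscale image with salt-and-pepper noise and then applies a small median filter, which removes most isolated dark/bright pixels while preserving edges better than simple averaging.

    import numpy as np
    from scipy.ndimage import median_filter

    def add_salt_and_pepper(image, fraction=0.02, rng=None):
        """Set a random fraction of pixels to pure black or pure white."""
        rng = np.random.default_rng() if rng is None else rng
        noisy = image.copy()
        mask = rng.random(image.shape) < fraction
        noisy[mask] = rng.choice([0, 255], size=int(mask.sum()))
        return noisy

    # Example usage on a grayscale uint8 array `image`:
    # denoised = median_filter(add_salt_and_pepper(image), size=3)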

Dead pixels in an LCD monitor produce a similar, but non-random, display.[11]

Shot noise

Main article: Shot noise

The dominant noise in the darker parts of an image from an image sensor is typically that caused by statistical quantum fluctuations, that is, variation in the number of photons sensed at a given exposure level. This noise is known as photon shot noise.[6] Shot noise has a root-mean-square value proportional to the square root of the image intensity, and the noises at different pixels are independent of one another. Shot noise follows a Poisson distribution, which except at very low intensity levels approximates a Gaussian distribution.
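
A minimal simulation of photon shot noise, assuming the input array holds expected photon (or electron) counts per pixel: each recorded value is drawn from a Poisson distribution with that mean, so the RMS fluctuation grows as the square root of the signal while the signal-to-noise ratio still improves with brightness.

    import numpy as np

    def simulate_shot_noise(expected_counts, rng=None):
        """Draw an observed image from independent per-pixel Poisson distributions."""
        rng = np.random.default_rng() if rng is None else rng
        # Standard deviation at each pixel is sqrt(mean), i.e. the square root of the signal.
        return rng.poisson(expected_counts).astype(np.float64)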

In addition to photon shot noise, there can be additional shot noise from the dark leakage current in the image sensor; this noise is sometimes known as "dark shot noise"[6] or "dark-current shot noise".[12] Dark current is greatest at "hot pixels" within the image sensor. The variable dark charge of normal and hot pixels can be subtracted off (using "dark frame subtraction"), leaving only the shot noise, or random component, of the leakage.[13][14] If dark-frame subtraction is not done, or if the exposure time is long enough that the hot pixel charge exceeds the linear charge capacity, the noise will be more than just shot noise, and hot pixels appear as salt-and-pepper noise.
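
A schematic version of dark-frame subtraction, under the simplifying assumption that several dark frames taken at the same exposure time and temperature are available (function and variable names are illustrative): averaging the dark frames estimates the fixed dark charge, including hot pixels, which is then subtracted from the light frame, leaving the random components.

    import numpy as np

    def dark_frame_subtract(light_frame, dark_frames):
        """Subtract the fixed dark signal estimated from a stack of dark frames."""
        # The mean of the dark frames estimates dark current and hot-pixel charge,
        # but cannot remove the random (shot-noise) part of the leakage.
        master_dark = np.mean(np.stack(list(dark_frames)), axis=0)
        return light_frame.astype(np.float64) - master_dark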

Quantization noise (uniform noise)

The noise caused by quantizing the pixels of a sensed image to a number of discrete levels is known as quantization noise. It has an approximately uniform distribution. Though it can be signal dependent, it will be signal independent if other noise sources are big enough to cause dithering, or if dithering is explicitly applied.[10]
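
For an idealized quantizer with step size \Delta and error uniformly distributed over one step (consistent with the "approximately uniform distribution" above), the standard result is

    \sigma_q^2 = \frac{1}{\Delta}\int_{-\Delta/2}^{\Delta/2} x^2 \, dx = \frac{\Delta^2}{12}, \qquad \sigma_q = \frac{\Delta}{\sqrt{12}} \approx 0.29\,\Delta,

so using more quantization levels (a smaller \Delta) reduces this noise proportionally.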

Film grain

The grain of photographic film is a signal-dependent noise, with similar statistical distribution to shot noise.[15] If film grains are uniformly distributed (equal number per area), and if each grain has an equal and independent probability of developing to a dark silver grain after absorbing photons, then the number of such dark grains in an area will be random with a binomial distribution. In areas where the probability is low, this distribution will be close to the classic Poisson distribution of shot noise. A simple Gaussian distribution is often used as an adequately accurate model.[10]
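
This reasoning can be made explicit: if an area contains n grains and each develops independently with probability p, the number k of developed grains is binomially distributed,

    k \sim \mathrm{Binomial}(n, p), \qquad \mathbb{E}[k] = np, \qquad \mathrm{Var}[k] = np(1 - p),

and for small p this approaches a Poisson distribution with mean and variance both equal to np, the same signal-dependent behaviour as shot noise.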

Film grain is usually regarded as a nearly isotropic (non-oriented) noise source. Its effect is made worse by the distribution of silver halide grains in the film also being random.[16]

Anisotropic noise

Some noise sources show up with a significant orientation in images. For example, image sensors are sometimes subject to row noise or column noise.[17]
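
A minimal way to visualize such oriented noise (an illustrative construction, not a model of any particular sensor) is to add a single random offset per row, which produces horizontal streaks rather than independent per-pixel speckle.

    import numpy as np

    def add_row_noise(image, sigma=5.0, rng=None):
        """Add one Gaussian offset per row, giving horizontally correlated noise."""
        rng = np.random.default_rng() if rng is None else rng
        offsets = rng.normal(0.0, sigma, size=(image.shape[0], 1))
        # Broadcasting applies each row's offset to every pixel in that row.
        return image.astype(np.float64) + offsets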

In digital cameras

Image on the left has exposure time of >10 seconds in low light. The image on the right has adequate lighting and 0.1 second exposure.

In low light, correct exposure requires the use of slow shutter speed (i.e. long exposure time), higher gain (ISO sensitivity), or both. On most cameras, slower shutter speeds lead to increased salt-and-pepper noise due to photodiode leakage currents. At the cost of a doubling of read noise variance (41% increase in read noise standard deviation), this salt-and-pepper noise can be mostly eliminated by dark frame subtraction. Banding noise, similar to shadow noise, can be introduced through brightening shadows or through color-balance processing.[18]
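
The figure quoted above follows from treating the read noise of the light frame and of the dark frame as independent with the same standard deviation \sigma_r, so that subtracting the frames adds their variances:

    \sigma_{\mathrm{diff}} = \sqrt{\sigma_r^2 + \sigma_r^2} = \sqrt{2}\,\sigma_r \approx 1.41\,\sigma_r,

that is, the read-noise variance doubles and its standard deviation rises by about 41%.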

The relative effects of both read noise and shot noise increase as the exposure is reduced, corresponding to increased ISO sensitivity, since fewer photons are counted (shot noise) and more amplification of the signal is necessary.
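
One common simplified way to express this (a textbook model, not a formula given by this article's sources) is to write the per-pixel signal-to-noise ratio for S detected photons and read noise \sigma_r (in electrons) as

    \mathrm{SNR} \approx \frac{S}{\sqrt{S + \sigma_r^2}},

so that as exposure (and hence S) falls, the shot-noise contribution degrades the ratio only as \sqrt{S}, while the fixed read-noise floor \sigma_r becomes relatively more important.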

Effects of sensor size

The size of the image sensor, or effective light collection area per pixel sensor, is the largest determinant of signal levels that determine signal-to-noise ratio and hence apparent noise levels, assuming the aperture area is proportional to sensor area, or that the f-number or focal-plane illuminance is held constant. That is, for a constant f-number, the sensitivity of an imager scales roughly with the sensor area, so larger sensors typically create lower noise images than smaller sensors. In the case of images bright enough to be in the shot noise limited regime, when the image is scaled to the same size on screen, or printed at the same size, the pixel count makes little difference to perceptible noise levels – the noise depends primarily on sensor area, not how this area is divided into pixels. For images at lower signal levels (higher ISO settings), where read noise (noise floor) is significant, more pixels within a given sensor area will make the image noisier if the per pixel read noise is the same.

For instance, the noise level produced by a Four Thirds sensor at ISO 800 is roughly equivalent to that produced by a full frame sensor (with roughly four times the area) at ISO 3200, and that produced by a 1/2.5" compact camera sensor (with roughly 1/16 the area) at ISO 100. This ability to produce acceptable images at higher sensitivities is a major factor driving the adoption of DSLR cameras, which tend to use larger sensors than compacts. An example shows a DSLR sensor at ISO 400 creating less noise than a point-and-shoot sensor at ISO 100.[19]
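
The equivalence cited above corresponds to a rough rule of thumb in which, for equal noise, the usable ISO setting scales with sensor area (a simplification that ignores differences in quantum efficiency and read noise between sensors):

    \mathrm{ISO}_2 \approx \mathrm{ISO}_1 \times \frac{A_2}{A_1},

for example, 800 \times 4 \approx 3200 for a full-frame sensor with about four times the area of a Four Thirds sensor.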

Sensor fill factor

The image sensor has individual photosites to collect light from a given area. Not all of the sensor's area is used to collect light, because part of it is occupied by other circuitry. A higher fill factor means that more of the sensor area collects light, allowing better high-ISO performance for a given sensor size.[20]

Sensor heat

Temperature can also have an effect on the amount of noise produced by an image sensor, because leakage currents increase with temperature. For this reason, DSLRs tend to produce more noise in summer than in winter.[13]

Image noise reduction

Main article: Noise reduction

An image is a picture, photograph, or other two-dimensional representation of a scene.[21] Most algorithms for converting image sensor data to an image, whether in-camera or on a computer, involve some form of noise reduction. There are many procedures for this, but all attempt to determine whether the actual differences in pixel values constitute noise or real photographic detail, and average out the former while attempting to preserve the latter. However, no algorithm can make this judgment perfectly, so there is often a tradeoff between noise removal and preservation of fine, low-contrast detail that may have characteristics similar to noise. Many cameras have settings to control the aggressiveness of the in-camera noise reduction.[22]

A simplified example of the impossibility of unambiguous noise reduction: an area of uniform red in an image might have a very small black part. If this is a single pixel, it is likely (but not certain) to be spurious and noise; if it covers a few pixels in an absolutely regular shape, it may be a defect in a group of pixels in the image-taking sensor (spurious and unwanted, but not strictly noise); if it is irregular, it may be more likely to be a true feature of the image. But a definitive answer is not available.

This decision can be assisted by knowing the characteristics of the source image and of human vision. Most noise reduction algorithms perform much more aggressive chroma noise reduction, since there is little important fine chroma detail that one risks losing. Furthermore, many people find luminance noise less objectionable to the eye, since its textured appearance mimics the appearance of film grain.
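
A minimal sketch of this idea, assuming an RGB image in a NumPy array and using a simple unscaled luma/chroma split (not the exact transform used by any particular camera or raw converter): the chroma channels are blurred much more strongly than the luma channel, then the image is recombined.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def chroma_denoise(rgb, luma_sigma=0.5, chroma_sigma=3.0):
        """Blur chroma far more than luma, mimicking typical chroma noise reduction."""
        r, g, b = (rgb[..., i].astype(np.float64) for i in range(3))
        y = 0.299 * r + 0.587 * g + 0.114 * b   # BT.601 luma weights
        cb, cr = b - y, r - y                   # unscaled chroma differences
        y = gaussian_filter(y, luma_sigma)
        cb = gaussian_filter(cb, chroma_sigma)
        cr = gaussian_filter(cr, chroma_sigma)
        r2, b2 = y + cr, y + cb                 # invert the split
        g2 = (y - 0.299 * r2 - 0.114 * b2) / 0.587
        return np.clip(np.stack([r2, g2, b2], axis=-1), 0, 255).astype(np.uint8)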

The high sensitivity image quality of a given camera (or RAW development workflow) may depend greatly on the quality of the algorithm used for noise reduction. Since noise levels increase as ISO sensitivity is increased, most camera manufacturers increase the noise reduction aggressiveness automatically at higher sensitivities. This leads to a breakdown of image quality at higher sensitivities in two ways: noise levels increase and fine detail is smoothed out by the more aggressive noise reduction.

In cases of extreme noise, such as astronomical images of very distant objects, it is not so much a matter of noise reduction as of extracting a little information buried in a lot of noise; techniques are different, seeking small regularities in massively random data.

Video noise

Main article: Noise (video)

In video and television, noise refers to the random dot pattern that is superimposed on the picture as a result of electronic noise, the 'snow' that is seen with poor (analog) television reception or on VHS tapes. Interference and static are other forms of noise that can affect radio and television signals; they are unwanted, though not random.

Useful noise

High levels of noise are almost always undesirable, but there are cases when a certain amount of noise is useful, for example to prevent discretization artifacts (color banding or posterization). Some noise also increases acutance (apparent sharpness). Noise purposely added for such purposes is called dither; it improves the image perceptually, though it degrades the signal-to-noise ratio.
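
A small demonstration of the banding-versus-grain trade, using an arbitrary synthetic gradient and 8 output levels for clarity: quantizing the gradient directly produces visible bands, while adding a little noise before quantization replaces the bands with fine grain.

    import numpy as np

    def quantize(image, levels=8):
        """Quantize a 0-255 image to the given number of evenly spaced levels."""
        step = 255.0 / (levels - 1)
        return np.round(image / step) * step

    # Smooth horizontal gradient, 256 rows by 1024 columns, values 0..255.
    gradient = np.tile(np.linspace(0, 255, 1024), (256, 1))

    banded = quantize(gradient)   # shows distinct bands
    rng = np.random.default_rng(0)
    dither = rng.normal(0, 255.0 / 14, gradient.shape)   # roughly half a quantization step
    dithered = quantize(np.clip(gradient + dither, 0, 255))   # bands replaced by grain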

References

  1. Leslie Stroebel and Richard D. Zakia (1995). The Focal encyclopedia of photography. Focal Press. p. 507. ISBN 978-0-240-51417-8.
  2. Rohankar, Jayant (Nov 2013). "Survey on Various Noises and Techniques for Denoising the Color Image" (PDF). International Journal of Application or Innovation in Engineering & Management. 2 (11). Retrieved 15 May 2015.
  3. Dr. Philippe Cattin (2012-04-24). "Image Restoration: Introduction to Signal and Image Processing". MIAC, University of Basel. Retrieved 11 October 2013.
  4. Jun Ohta (2008). Smart CMOS Image Sensors and Applications. CRC Press. ISBN 0-8493-3681-3.
  5. Junichi Nakamura (2005). Image Sensors and Signal Processing for Digital Still Cameras. CRC Press. ISBN 0-8493-3545-0.
  6. Lindsay MacDonald (2006). Digital Heritage. Butterworth-Heinemann. ISBN 0-7506-6183-6.
  7. Rafael C. Gonzalez and Richard E. Woods (2007). Digital Image Processing. Pearson Prentice Hall. ISBN 0-13-168728-X.
  8. Alan C. Bovik (2005). Handbook of Image and Video Processing. Academic Press. ISBN 0-12-119792-1.
  9. Linda G. Shapiro and George C. Stockman (2001). Computer Vision. Prentice-Hall. ISBN 0-13-030796-3.
  10. Charles Boncelet (2005). "Image Noise Models". In Alan C. Bovik (ed.). Handbook of Image and Video Processing. Academic Press. ISBN 0-12-119792-1.
  11. Charles Boncelet (2005). In Alan C. Bovik (ed.). Handbook of Image and Video Processing. Academic Press. ISBN 0-12-119792-1.
  12. James R. Janesick (2001). Scientific Charge-coupled Devices. SPIE Press. ISBN 0-8194-3698-4.
  13. Michael A. Covington (2007). Digital SLR Astrophotography. Cambridge University Press. ISBN 0-521-70081-7.
  14. R. E. Jacobson, S. F. Ray, G. G. Attridge, and N. R. Axford (2000). The Manual of Photography. Focal Press. ISBN 0-240-51574-9.
  15. Thomas S. Huang (1986). Advances in Computer Vision and Image Processing. JAI Press. ISBN 0-89232-460-0.
  16. Brian W. Keelan and Robert E. Cookingham (2002). Handbook of Image Quality. CRC Press. ISBN 0-8247-0770-2.
  17. Joseph G. Pellegrino et al. (2006). "Infrared Camera Characterization". In Joseph D. Bronzino (ed.). Biomedical Engineering Fundamentals. CRC Press. ISBN 0-8493-2122-0.
  18. McHugh, Sean. "Digital Cameras: Does Pixel Size Matter? Part 2: Example Images using Different Pixel Sizes (Does Sensor Size Matter?)". Retrieved 2010-06-03.
  19. R. N., Clark (2008-12-22). "Digital Cameras: Does Pixel Size Matter? Part 2: Example Images using Different Pixel Sizes (Does Sensor Size Matter?)". Retrieved 2010-06-03.
  20. Wrotniak, J. Andrzej (2009-02-26). "Four Thirds Sensor Size and Aspect Ratio". Retrieved 2010-06-03.
  21. Akansha Singh and K. K. Singh (2012). Digital Image Processing. Umesh Publications. ISBN 978-93-80117-60-7.
  22. K. J. Sreeja and Prudhvi Raj Budumuru (2003). http://www.ijera.com/papers/Vol3_issue6/CG36496501.pdf
