
Friday, March 21, 2014

Why are blue skies noisy in digital photos?

A PHOTOGRAPHER ASKS, “Why are blue skies so noisy in photos?”



A noisy blue sky in a photo, greatly enlarged.

This is a common question. Here are the issues, as far as I can tell:

Skies are blue because of Rayleigh scattering, where light is scattered by the molecules of air. The higher the frequency of light, the more strongly it is scattered (the effect goes roughly as the fourth power of frequency): so when you photograph a blue sky, the camera’s blue color channel will be brighter than the green, and the green will be brighter than the red channel. This also explains the orange color of sunsets: when looking directly at the sun, you are mainly seeing the light which hasn’t been scattered, which is primarily red along with some green, giving us orange colors. On the other hand, dust and water vapor in the sky will tend to scatter all frequencies of light, desaturating the blue color given us by Rayleigh scattering. I ought to note that overcast or hazy skies do not have a noise problem.
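To get a rough feel for how strong this effect is, here is a small Python sketch; the wavelengths are just representative values for a camera’s red, green, and blue channels, not measurements from any particular sensor:

    # Rayleigh scattering strength goes roughly as 1/wavelength^4.
    # These channel wavelengths are illustrative assumptions, not sensor data.
    wavelengths_nm = {"red": 620, "green": 530, "blue": 465}

    red = wavelengths_nm["red"]
    for name, wl in wavelengths_nm.items():
        print(name, round((red / wl) ** 4, 2))
    # red 1.0, green ~1.87, blue ~3.16: the sky puts roughly three times as much
    # light into the blue channel as into the red, so the red channel sits low
    # and relatively noisy.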

We tend to notice noise more in uniform regions, such as blue skies. The more uniform a perception is, the more sensitive we are to subtle differences in that perception. The same absolute amount of noise in a complex, heavily textured scene will be less noticeable.

Granted that there is already some noise in the sky for whatever reason, be aware that the common JPEG file format — used for most photos on the Internet — can add further noise of its own in the form of compression artifacts, which are blocky 8x8 pixel patterns. Again, these will be more visible in areas of uniform color, and the greater the compression, the more visible the blocky patterns. JPEG can also optionally discard much of the color information (chroma subsampling), leading to even more artifacts.
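If you want to see these artifacts for yourself, here is a quick sketch using the Pillow library (the file names are hypothetical) that saves the same photo at several quality and chroma-subsampling settings; compare the sky areas in the results:

    from PIL import Image

    img = Image.open("blue_sky.jpg")   # hypothetical source photo

    # Lower quality means stronger compression and more visible 8x8 blocks;
    # subsampling=0 keeps full color (4:4:4), subsampling=2 discards most of it (4:2:0).
    for quality in (95, 75, 50):
        for subsampling, label in ((0, "444"), (2, "420")):
            img.save(f"sky_q{quality}_{label}.jpg", "JPEG",
                     quality=quality, subsampling=subsampling)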

The color of a blue sky is often close to, or outside of, the range or gamut of the standard sRGB and Adobe RGB color spaces. The result is that the red color channel will be quite dark and noisy, unless you overexpose the sky, making it a bright, textureless cyan or white. This is most obvious with brilliant, clear, and clean blue skies, such as are found in winter, at high latitudes and altitudes, and when using a polarizer. At dusk, the problem is probably worse.

Depending on the camera and white balance settings, the red color channel will be amplified considerably, and amplifying a channel also amplifies its noise; since the red channel is likely noisy to begin with, this only makes things worse. The blue color channel may be amplified as well, increasing its noise too. Consider also that most cameras have twice as many green-sensitive sensels as red or blue ones, leading to more noise in the red and blue channels.

Human vision is sensitive to changes in the blue color range. Small changes in the RGB numbers in this range produce a larger change in visual sensation than they would for some other colors, so a relatively small amount of noise will be more visible in the color of a blue sky.

In order to create a really clean image from a camera’s raw data, high mathematical precision is needed in the calculations, as well as the ability to temporarily accept negative or out-of-range color values during processing, which is called working in an “unbounded mode.” This can make raw conversion quite slow, and so many manufacturers take shortcuts, aiming for images that are “good enough” rather than precisely accurate. But the result of using imprecise arithmetic is extra noise, along with possibly other digital artifacts.
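Here is a toy numpy sketch of why unbounded, high-precision processing matters; the pixel values and gains below are invented purely for illustration:

    import numpy as np

    pixel = np.array([0.04, 0.25, 0.80])     # made-up linear R, G, B for a deep blue sky
    wb_gain = np.array([2.2, 1.0, 1.35])     # assumed white-balance multipliers

    # Unbounded mode: let intermediate values exceed 1.0, clip only at the very end.
    unbounded = pixel * wb_gain              # blue is now ~1.08, out of range but kept
    final_good = np.clip(unbounded * 0.8, 0, 1)   # some later exposure adjustment

    # Shortcut: clamp and quantize to 8 bits after every intermediate step.
    clipped = np.round(np.clip(pixel * wb_gain, 0, 1) * 255) / 255
    final_bad = np.clip(clipped * 0.8, 0, 1)

    print(final_good)   # ~[0.070 0.200 0.864]
    print(final_bad)    # ~[0.069 0.201 0.800] -- the blue highlight detail is gone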

So the problem of blue sky noise is a nice mixture of physics, mathematics, human physiology and psychology, technical standards, and camera engineering.

Tuesday, July 17, 2012

At the Limit of Perception

MANY PHOTOGRAPHERS AIM FOR exceptionally clean images, low in noise, and high in dynamic range. However, extreme sensor sensitivity is rarely needed for most photographs, especially if the photographer sticks to the basic rules of photography, which include using good lighting. A good, bright primary source of light, along with perhaps fill-in lights or reflectors, is typically all that is needed to get good photographs.

But consider this photograph of a canoe, taken about 45 minutes after sunset, on a moonless, starless night, illumined by the waning skylight, distant fireworks and lightning, and a lone incandescent lamp a hundred or more yards away:

DSC_5231

This was an interesting scene to my eyes, but there isn’t much to see in my image — just a very faint outline of an object.  You might have better luck seeing something if you click the photo twice to see it in Flickr with a dark gray background.

I took this with my camera mounted on a tripod, but because I could hardly focus at all, I set the aperture to f/8 for greater depth of field, and I didn’t use a long exposure time because I didn’t want to spend 2 hours getting my photo — one hour, perhaps, for the exposure, and one hour for dark frame subtraction. Sometimes it is inconvenient or even impossible to get a good exposure, so you have to make do with what you can get. I wanted to see how good of an image I could get at the limit of the camera’s performance.

Tuesday, May 3, 2011

Zillions and Jillions

PLEASE CONSIDER the absolutely most minimalist camera possible. This camera will have precisely one pixel or light sensor location, and it will generate a photograph that has a bit depth of exactly one binary digit: black or white. This camera can produce precisely two images:

Untitled-1

This maximally minimalistic digital camera is actually useful. These are incorporated into proximity switches: devices that answer the question, is something there? You might find them on conveyor lines in factories, or in automatic door openers.

Instead of just one pixel, let's consider a camera with four. Here are all possible images taken with this sort of camera:

4-bit

With just a 2 by 2 pixel array, at one bit depth, we are able to take 16 different photographs.

We can easily calculate the total number of individual photographs that can be possibly taken by a digital camera capable of displaying only black or white:
  • 1 bit camera = 2 photos
  • 2 bit camera = 4 photos
  • 3 bit camera = 8 photos
  • 4 bit camera = 16 photos
We add one bit and we double the number of possible photos we can take with our camera. This number gets very big very fast. Suppose we take a very low end digital camera, 500x500 pixels = 250,000 total pixels. Even if we limit this camera to using only black and white pixels, we end up with a huge total number of possible images:

Total images possible =
  • 2 x 2 x 2 x ..... (multiply a total of 249,999 times)
  • = 2^250,000
  • =  3 followed by 75,257 zeros, plus a bit more.
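As a sanity check on that count, Python’s arbitrary-precision integers can compute the number exactly (recent Python versions cap integer-to-string conversion by default, so the limit is raised first):

    import sys

    if hasattr(sys, "set_int_max_str_digits"):   # Python 3.11+ caps int->str length
        sys.set_int_max_str_digits(100_000)

    s = str(2 ** 250_000)
    print(len(s))    # 75258 digits
    print(s[:2])     # "31" -- so the count is about 3.15 x 10^75257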
By comparison, the total number of elementary particles (electrons, protons, etc.) in the entire universe is estimated to be a 1 followed by 80 zeros, or 10^80.

Suppose you have an entire universe of particles, and then you give every particle its own universe of the same size, and in each of those universes, you assign a similarly sized universe for each elementary particle inside them, and repeat the process nearly a thousand times. That's how many unique photographs you can take with your cruddy 500x500 pixel 1-bit depth digital camera.

We can get large numbers when we examine matter, but information is another class of being in itself, vastly larger than mere matter. It is for this reason that philosophers have posited that information resides in a realm above, beyond, or outside that of mere matter.

The following ought to convince you that an image 500 pixels on a side with 1 bit pixels can be quite rich:

Union Station in Saint Louis - 1 bit depth image

My fellow Saint Louisians ought to recognize this scene. 1-bit images are generally quite useful, and have been used for decades in copying machines.

Even a 1 bit depth digital camera can produce an astounding number of different images. But suppose we add color — the number of possible images becomes even more staggering. Take my lowly Nikon D40 camera; it has about 6 million sensors, and if I shoot JPEG, I get 256 possible values per sensor. The camera has about 3 million green sensors, and 1.5 million sensors each of red and blue; full color is mathematically estimated for each pixel location.

So for each pixel location, we multiply by 256. The total number of images = 256^6,016,000, which is approximately equal to a 1 followed by 14,487,972 zeroes. This is a mere pittance compared to the Seitz D3 digital scan back, which can capture 500 megapixel images with 48 bit color per pixel:
  • Total = (2^48)^500,000,000 = 1 followed by 7,224,719,896 zeros.
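These last two numbers are far too long to print out, but their digit counts are easy to check with logarithms; a small sketch:

    import math

    def digit_count(base, exponent):
        """Number of decimal digits in base**exponent."""
        return math.floor(exponent * math.log10(base)) + 1

    print(digit_count(256, 6_016_000))         # 14487972 digits (Nikon D40, JPEG)
    print(digit_count(2 ** 48, 500_000_000))   # 7224719896 digits (Seitz D3)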
I think it is safe to assume that like snowflakes, no two photographs are alike.

But this vast potentiality of photography ought not get us puffed up with pride in our creativity. Alas, the vast, overwhelming majority of these theoretically unique photographs look like this:

Random colored bits

Uniform random noise. A trillion monkeys, each generating a trillion random images per second for a trillion years, will likely never once produce anything that looks like a photograph. You could call this an ‘image’, but only in the most general terms. At best, images like this are only an exercise in conceptual art, and then only for the first photographer who does it. And I just did it.

[Note: to generate a random noise image such as this in Photoshop, be sure to start with a 50% gray image. If your starting point is a white image, then half of your final pixels will be white, and if you start with a black image, then half your pixels will still be black. Using the Filter->Noise->Add Noise... function, add 50% Uniform noise. Be sure to turn off ‘Monochromatic’. I find I get better results if I do this for each color channel independently.]
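For those who prefer to script it, here is a rough numpy/Pillow equivalent of that Photoshop recipe, filling each channel independently with uniform random values (the output file name is arbitrary):

    import numpy as np
    from PIL import Image

    rng = np.random.default_rng()
    # 500x500 RGB image, each channel filled independently with uniform values 0..255
    noise = rng.integers(0, 256, size=(500, 500, 3), dtype=np.uint8)
    Image.fromarray(noise, "RGB").save("uniform_noise.png")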

We call an image like this ‘random’ because it doesn't look like anything in particular. Perhaps a supremely intelligent being could look at this and immediately discern the specific algorithm Photoshop used to generate it. But we can't; we are finite creatures, and so the perception of noise is very much a human phenomenon. Perhaps we can imagine some figure in this noise, but that is tenuous at best. Actually, the entire concept of randomness is problematic: see my article here.

But we are good at perceiving some image if the noise isn't too great, especially if we are familiar with the subject:

Mystery image in noise

That is an image with a 1:10 signal/noise ratio. Can you guess the subject? What does your gut say?

I've never before attempted to estimate signal to noise ratios for digital images. Here, I've created a series of images with varying relative amounts of noise:

Signal to noise

Clearly, high ISO images, and heavily manipulated images, can have very high relative amounts of noise, much more than I suspected.

A 1:4 signal-to-noise ratio is roughly the limit I'm able to handle, while surprisingly a 1:1 ratio isn't horribly bad for some purposes. But please note that the noise found in actual digital images is not uniform; it lurks most of all in the shadows, or more precisely, the signal-to-noise ratio is smallest there, for highlights have a large absolute amount of noise even though it is small relative to the signal.
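The exact mixing procedure I used isn't spelled out here, but one simple way to interpret a ratio like 1:4 is as a weighted blend of the image with uniform noise; a sketch, assuming a float image scaled to the 0 to 1 range:

    import numpy as np

    def mix_signal_and_noise(signal, snr):
        """Blend an image with uniform noise at a given signal:noise ratio.

        signal: float array scaled to [0, 1];  snr: 4.0 for 4:1, 0.25 for 1:4, etc.
        """
        rng = np.random.default_rng()
        noise = rng.random(signal.shape)
        w = snr / (snr + 1.0)            # fraction of the result that is signal
        return w * signal + (1.0 - w) * noise

    # mix_signal_and_noise(img, 0.1) would give something like the 1:10 case above.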

Suppose we are willing to accept a 4:1 signal-to-noise ratio, which means that 1/5 of our bits are just noise. For this image, which is 500 pixels on an edge, with 48 bits of color, that means that about 10 of each pixel's 48 bits are noise. We can calculate:
  • Number of acceptably noisy images equivalent to a reference image = (2^10)^250,000 = a 1 followed by 752,576 zeros.
2% total noise is virtually undetectable by the human eye, unless you look very closely. Also, slight adjustments or rotations of images are hard to see:

Four variations

For our 500 x 500 pixel full-color image, I estimate that there are approximately 4 trillion largely undetectable variations for every reference image. For larger images, the number goes up to values never heard of even in government economics.

OK, so I assert that of all theoretically possible images, the relative number which would be recognized as photographs is vanishingly small. Fortunately, since we are dealing with almost unimaginably large numbers, this isn't an issue. Roughly estimating the number of photographic images possible ought to be doable.

Patterns make an image. Either there are significantly large adjacent patches of pixels that are very similar, or there are similarities between pixels even though they are widely separated. Or, we can find a pattern that repeats on various scales.

Look at this image:

symmetry

It is pretty obvious that we have symmetry of the left and right halves, as well as the top and bottom halves. Recall that the total number of possible variations of a 500x500 pixel, 1 bit depth image is represented by a 3 followed by 75,257 zeros. Because we have mirror symmetry, we basically repeat a 250x250 pixel image four times:

Total number of images:
  • = 2^(250 x 250) = 2^62,500
  • = about 1 followed by 18,814 zeros
This is an enormous figure, but it is only about a 1/10^56,443 fraction of the total number possible. We will be able to see the mirror reflection in nearly all the images produced: we would not perceive the symmetry if the image is all black or all white, or if the pattern is symmetric to begin with, but this is a very small proportion of the total.

With this 1 bit depth image, I find that I can only detect symmetry when I have greater than a 1:1 signal to noise ratio. We also should be able to detect other variations, such as changing the location and orientation of the axes of symmetry — all we have to do is ensure that our axis of symmetry is not too close to the boundary of the image.

The eye can detect global patterns such as seen above, and also local patterns, where there is some correlation between adjacent pixels:

local

Within a 500 x 500 pixel image, we can produce about 400 x 400 = 160,000 different images of a black 100x100 pixel square; our numbers go up if we can accept slight variations in the color, size, and orientation of the square. But if we are willing to accept some noise, as we see here, we can get zillions of possible images.

No two photographs are alike, and even if you take multiple shots with an ordinary camera under controlled conditions, there is no chance in your life that your resulting images will be exactly the same. This can be helpful: if you take multiple shots, you can blend them together to greatly reduce the amount of noise visible in the final image. Super-resolution techniques can also use multiple images to construct a higher-resolution final image, and can even remove diffraction artifacts.
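The blending idea is easy to demonstrate: averaging N independent exposures leaves the signal alone while random noise shrinks roughly as the square root of N. A toy sketch with a flat gray patch and an invented noise level:

    import numpy as np

    rng = np.random.default_rng()
    true_scene = np.full((100, 100), 0.5)     # hypothetical flat mid-gray patch

    frames = [true_scene + rng.normal(0, 0.10, true_scene.shape) for _ in range(16)]
    stacked = np.mean(frames, axis=0)

    print(frames[0].std())   # ~0.10  -- noise of a single frame
    print(stacked.std())     # ~0.025 -- 16 frames cut the noise by a factor of ~4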

The sharpness of lenses is a major limiting factor for producing unique photographs. The blurring found in some optics means that adjacent pixels are often strongly correlated, even if they are supposed to capture a high-contrast edge. Chromatic aberration and the mysterious purple fringing also lead to greater correlation between pixels, reducing the originality of photographs. The same goes for noise reduction software. Extreme, high quality bokeh, found in excellent portrait lenses, can reduce the out-of-focus background detail to a nearly uniform blur; this lack of detail leads us to focus our attention on the subject of the photograph. With extreme blurring and the use of the Photoshop Threshold tool, we can even reduce our images to match the 1-bit camera demonstrated at the top of this article.

Despite, or rather because of, the huge variation of possible photographs, it is relatively easy to detect unauthorized duplicates of images. Forensic analysts can detect duplicates with overwhelmingly high certainty, even if the original image was severely altered. Likewise, the use of the Clone tool in Photoshop — this copies one part of an image to another part — is easily detected if a large enough area is cloned, even if it is visually blended well, because these kinds of correlations within an image cannot practically be attributed to chance. Never in a jillion years can we expect something like that to happen on its own.

If two photographers with two different cameras each take a photograph of the same scene, the resulting images will differ in a huge number of minor (or even major) ways, so much so that we can be absolutely certain that the two images are in fact different. Even so, with a reasonably complex scene and good image quality, we ought to be able to say with high confidence that the two photographs were taken of the same scene at the same time, and we also ought to be able to detect whether the scene was artificially recreated at a later time, or was adjusted in Photoshop.

I think this discussion of seemingly impossibly large numbers tells us that photography, and digital art in general, is an incredibly rich and humanly inexhaustible medium.

[Click here for a discussion of names for long numbers. As it so happens, practically the only place these names for long numbers are actually used is in lists of names of long numbers. The word ‘zillion’, even though it is almost meaningless, is a good enough name for the quantities we are discussing here.]

Thursday, January 27, 2011

White Balance, Part 1

PLEASE CONSIDER THIS photo of my living room:

Ritz-Carlton - camera white balance
(Or rather, this is a photo of the lounge at the Ritz-Carlton in Clayton, Missouri.)

Like very many interior photos — taken without a flash — this has a yellowish color cast. Undoubtedly this yellow cast is due to the color of the lighting. Obviously incandescent light has a slightly yellower color than daylight. So there is no surprise that incandescent photos appear yellow.

Perhaps you think that the white balance feature on your digital camera is simply a minor adjustment. Perhaps it corrects for the slightly yellower color of incandescent lighting. Certainly I thought so.

Here is a white-balanced version of the photo above. I adjusted the photo so that a white object under this lighting was measurably neutral: that is, the red, green, and blue color values of the white object were equal after adjustment.

Ritz-Carlton - neutral white balance

It looks a bit better: the yellow color cast has been removed. A small difference, but I think it improves the photo a bit, for it brings out the variety of colors better, and it is closer to what I remember seeing. I recommend always using a neutral white balance unless you have a specific reason not to do so.

Now consider this photo:

Ritz-Carlton - Daylight white balance

This is very yellow. But this happens to be how the scene looks assuming a daylight white balance. Were I to take a photo of a white object under bright daylight with this balance, it would look white. Incandescent lighting is actually far more yellow/orange in color than is daylight.

You may not be aware that your eye does an automatic white balance: when you look at a scene, your eyes attempt to subtract out the color of the lighting, and your eyes do a pretty good job of this, up to a point. Cameras work the same way; they also attempt to automatically subtract out the color of the light.

Keep in mind that changes in the color of light which appear slight to the eye may translate into radically different, objectively measurable differences in color.

Here is a photo taken outdoors, with a Daylight white balance:

Snow - Daylight white balance

And the same photo, with the color balance set to incandescent lighting; it is the same white balance I used in the nicely-corrected interior photo #2 above:

Snow - Incandescent white balance

We see that the camera's white balance corrects for extreme differences in the color of light, not merely minor differences. How the eye corrects for white balance is a mystery, and the technology behind a camera's automatic white balance feature is ultimately imperfect, which is why I do a manual white balance whenever I'm doing my best work.

White Balance and Noise

Digital cameras have an intrinsic, fixed white balance. The color data captured by the camera sensor is adjusted by the camera's computer according to the white balance setting. Here are our sample pictures again, showing what they look like when using my camera's native white balance:

Intrinsic camera white balance

The images are greenish because the green sensors are more sensitive than the red or blue ones under most lighting conditions. For more details, see the article What Does the Camera Really See? For an overview of the color system used by digital cameras, read Color Spaces, Part 1: RGB. Digital cameras generally are rather insensitive to blue light.

To correct for the green snow scene above, the camera must amplify both the red and blue color channels to match the green, and whenever you amplify a signal, you also amplify noise.

For the interior scene, the camera must amplify the red channel a little (since incandescent lighting has plenty of red light), while it has to amplify the blue channel a lot, creating plenty of noise.
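The arithmetic behind this is simple enough to show in a few lines of numpy: white balance is a per-channel gain, and whatever gain is applied to the signal is applied to its noise too. The channel level, noise level, and gain below are invented for illustration:

    import numpy as np

    rng = np.random.default_rng()
    # A blue channel that recorded a mean level of 0.2 with a noise sigma of 0.01
    blue = rng.normal(loc=0.2, scale=0.01, size=100_000)

    gain = 2.5                    # assumed blue-channel multiplier under tungsten light
    balanced = blue * gain

    print(blue.std())             # ~0.010
    print(balanced.std())         # ~0.025 -- the noise grew by exactly the same factor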

Here is an extreme example of this sort of noise amplification:

church - three channels

I took this a number of years ago with an inexpensive point-and-shoot camera. While the red and green channels have a lot of noise, due to the camera being set to ISO 1600, the blue channel at the bottom is particularly bad. The blue channel, due to incandescent white balance, was greatly amplified relative to the other channels, and so it shows plenty of noise.

Generally speaking, if you want low-noise interior photos, you are asking a lot of your camera, and you will likely pay a premium to get this. Read this for details: One Easy Rule for Quality Images.

White Balance and Exposure

If you don't set your white balance correctly, you risk bad exposure if you shoot JPEG. Please note that I define good exposure by taking all three color channels into consideration; see the article Three Opportunities for Overexposure for details. If even one of your three channels is significantly overexposed, you will get shifts in highlight color.  In an extreme case, here is the daylight photo set with an incandescent white balance:

Histograms

This is a selection from Nikon's View NX2 software, showing the three color histograms at this white balance. Each graph has the dark pixels to the left, and the bright pixels on the right: we see here that the red channel is underexposed, while the blue channel is overexposed. Due to poor white balance, we irrevocably lost both highlight and shadow detail.

The histograms seen here are similar to the three color histograms found on many digital cameras. Like many photographers, I usually check the color histograms to make sure that my photos are exposed properly. In order to use as much of the camera's dynamic range as possible, I try to expose the images as much as I can without overexposing any one of the three histograms.  However, were I to use this process with a poor white balance, I would inevitably get a severely underexposed image. In the image above, reducing the exposure to preserve the blue channel highlights would lead to a severely underexposed red channel:

Histograms - under exposed

Now you might just want to have this ‘look’ in your photo, and that's fine. But if you instead plan on ‘fixing it in Photoshop’, forget it.

Please  note that you will see the same problem if you shoot incandescent lighting with bluer white balances — Daylight, Cloudy, and Shade — except your red channel will likely be greatly overexposed while your blue channel will be underexposed.

As a general rule, you will get the best exposure if you use a neutral white balance. You can expose the image longer with less noise and less chance of clipping highlights if you set your white balance precisely.
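On the computer, a rough per-channel clipping check on the rendered file can stand in for the camera's blinking-highlights warning; here is a sketch assuming numpy and Pillow, with a hypothetical file name (note that this inspects the rendered JPEG, not the raw data):

    import numpy as np
    from PIL import Image

    img = np.asarray(Image.open("photo.jpg").convert("RGB"))

    for i, name in enumerate("RGB"):
        fraction_clipped = np.mean(img[..., i] == 255)
        print(f"{name}: {fraction_clipped:.2%} of pixels at the maximum value")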

But please remember that cameras have fixed intrinsic white balance, as seen above. You get a greater risk of overexposing or underexposing color channels when you shoot JPEG images, because the camera throws away lots of its original sensor data when performing a white balance — and you risk throwing away good useful data if you set your white balance wrong. For this reason, I shoot RAW images (which retain all of the original sensor data), because I can adjust the white balance on my computer after the fact. The risk of bad exposure is lessened — although not eliminated — when you shoot RAW and so you still ought to be careful at the time of shooting.

The trade-off is that RAW files need to be processed on your computer to produce an image usable for either printing or displaying on the web. I find the trade-off acceptable, although I know that many photographers do not.

Some authorities state that, since a camera has a fixed intrinsic white balance, the camera's exposure histograms ought to show the RAW color channels. I think this is an excellent idea. Some photographers attempt to do this by forcing the camera to use a white balance that does no color adjustment at all: their histograms in this case are correct. The UniWB method of using the camera's own intrinsic white balance was developed by Iliah Borg and others, and the method is described here. It is not for the faint-hearted, since all your photos will turn out green, and you will have to correct for white balance on your computer. You can, however, get better exposure. At the very least, this is a good learning tool, if not really practical.

Note to camera manufacturers: please show the RAW histograms when shooting RAW! Also, give the ability to zoom in to the brightest pixels on the histograms, for overexposure in digital photography is worse than underexposure. When you show blinking pixels, be sure that they blink when even one of the channels is overexposed.

When to White Balance and When Not to White Balance

As I mentioned before, I recommend always setting your white balance precisely unless you have a specific need to do otherwise.

Have a look at A Digital Color Wheel. Colors opposite from each other are called opponent colors: a color balance biased towards one edge of the wheel will always be at the expense of the opposite edge. If your image is too blue you will not get enough yellow, and too much green will mean too little magenta. If the white balance is in the center you will trade off quantity of color for quality of color; a well-white-balanced image will look richer in color content, and you have less risk of having your colors go out of gamut. You can get better results in saturating the colors if your white balance is precisely in the middle.

Sometimes you may want to capture a scene as you remember seeing it; but don't forget that your eye already does strong white balancing. So if you want to capture the warm glow of candlelight, set your white balance to somewhat warmer than neutral. To capture the cold mood of a snowy day, set your white balance to a somewhat cooler balance than neutral. Although your eye does do white balancing, this mechanism doesn't work well under dim lighting, although I am uncertain as to what the actual relationship might be. This is worth further research.

Here is a photo where I did not want the camera to subtract out the color of the light:

City Museum, in Saint Louis, Missouri, USA - Fantastical beast in blue light

A fantastical beast, at the City Museum in Saint Louis. Click the image twice to view it on black.

Sometimes you may want an image with a strong overall color tone, but you may get better results if you first convert the image to black and white, and then add a tone afterwards, not relying on white balance.

Mixed Lighting

Your eyes not only do an automatic white balance, but they adjust this white balance variably across the scene while you are looking at it. When you take a photo of a scene, and reduce that panorama down to a tiny, low-contrast image displayed on a page or on a screen, this automatic white balance hardly operates, which is why you have to get it right in the camera. A severe problem occurs when you have mixed lighting of multiple colors, for example, when you shoot an interior illuminated with incandescent lighting while also having windows to the outside appearing within your photo. Invariably, either the windows look fine while the interior is too yellow, or the interior looks fine while the scene outdoors is very blue. The photograph just doesn't look the way you see it in real life.

With architectural interiors, I will use daylight white balance for the windows and incandescent white balance for the interior, and then composite these versions of the image. The results are good, even though this is a tedious process. Big-budget cinematographers will put large yellow-colored gels over the outside of the windows so that the color of the transmitted daylight matches the lighting used in the interior.

Far more problematic is when fluorescent lighting is used in an interior. Not only do these lights have an odd color — typically they are simultaneously more yellow and more green than daylight — but fluorescent colors are not constant across brands of lamps. They do not provide a continuous full spectrum of light, and even more problematic, they change color as they flicker at twice the mains frequency, 100 or 120 times per second. Almost invariably, if you attempt to white balance fluorescent lighting, you will get strong shades of the opponent colors green and magenta throughout your image. These green/magenta color casts will be considerably increased if you also have daylight or incandescent lights in the scene. If at all possible, I will turn the fluorescent lights off.

Also see the article: White Balance, Part 2: The Gray World Assumption and the Retinex Theory.

Monday, July 5, 2010

What Does the Camera Really See?

X-Rite ColorChecker

Here is the familiar X-Rite ColorChecker, a handy color calibration target, which despite undergoing many name changes over the years, is well-supported by the photography industry. I took this as a RAW image, and processed it in Adobe Camera RAW (ACR) using a custom profile generated from this image. The colors look pretty good. I adjusted the exposure and black point a bit in ACR, so that the darkest and lightest neutral patches have their correct luminance value. I took this image under incandescent lighting, with an estimated color temperature of 2900 Kelvin; the camera measured the white balance from this card, and it looks quite accurate.

Human color vision is poorly understood, being subject to many conflicting theories, but this is understandable since biology is inherently messy. Ultimately we are subject to one of the greatest puzzles in philosophy: “Know Thyself”, which is found in Plato and other Classical literature. As knowledge of a thing is not a part of the thing itself, self-knowledge is problematical at best. But even if we plead ignorance about our own vision, we do have certain knowledge that digital cameras do not model human vision very well.

Digital cameras are made to be inexpensive and easy to mass-produce, and produce images that conform to widely-supported industry standards. These are not designed by biologists, psychologists, or philosophers, but rather by electrical engineers who follow the practices and principles of their profession. This is as it ought to be, but we also ought to expect that cameras won't record things precisely as we remember seeing them.

500px-Bayer_pattern_on_sensor.svg

This illustration is originally from Wikipedia and is not my own work. Click here for source and attribution.

This is an illustration of the Bayer Array, the most common method of distributing light sensors on a silicon chip. Individual sensels, specifically sensitive to red, green, and blue light, are systematically arrayed across the silicon chip. Dedicated computer algorithms, found in the camera's embedded computer, analyze adjoining sensels to estimate the color and intensity of light that falls on each spot. All of these taken together comprise the final image — after some additional processing.

Note that there are twice as many green sensors as either red or blue sensors. More sensors mean better detail and less noise in that channel. This is justified by the fact that human vision is most sensitive in the green region. Generally speaking, the green channel typically has the most natural-looking luminance, while the red channel tends to be too light and the blue too dark. After all, luminance is more important than color.
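To make the idea of demosaicing concrete, here is a deliberately simplified numpy sketch: each output pixel takes, for each color, the average of that color's sensels within its 3x3 neighborhood. Real cameras use far more elaborate algorithms; this is only a stand-in:

    import numpy as np

    def box3(a):
        """Sum over each pixel's 3x3 neighborhood (zero-padded at the edges)."""
        p = np.pad(a, 1)
        h, w = a.shape
        return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3))

    def demosaic_bilinear(mosaic):
        """Rough bilinear demosaic of an RGGB Bayer mosaic given as a float array."""
        h, w = mosaic.shape
        rgb = np.zeros((h, w, 3))
        masks = np.zeros((h, w, 3), dtype=bool)
        masks[0::2, 0::2, 0] = True      # red sensels
        masks[0::2, 1::2, 1] = True      # green sensels on red rows
        masks[1::2, 0::2, 1] = True      # green sensels on blue rows
        masks[1::2, 1::2, 2] = True      # blue sensels

        for c in range(3):
            vals = np.where(masks[..., c], mosaic, 0.0)
            counts = np.maximum(box3(masks[..., c].astype(float)), 1.0)
            rgb[..., c] = box3(vals) / counts
        return rgb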

Let's take another look at the image above, but showing it more as the camera actually recorded it:

X-Rite ColorChecker - UniWB

I processed this using the excellent free Macintosh software package, RAW Photo Processor. I developed this RAW image to closely represent what was actually recorded by the camera. Here I used UniWB as the white balance, which gives us the colors as actually recorded, without adjusting for the color of the light.

Because half of the camera's sensors record green light, this image has a green color cast, and since this photo was taken under incandescent lighting (which is more yellow and less blue than daylight), it also has a yellowish color cast. Either the camera's computer, or Adobe Camera RAW as in the top photo, will adjust the RAW image so that we get roughly equal red, green, and blue values on each of the neutral patches seen on the bottom row of the X-Rite calibration target.

The image is also quite dark compared to the corrected version. This is because the camera is linearly sensitive to light, and can only faithfully capture a rather limited range of light levels in one exposure, unlike the human eye. Typically, a digital camera will apply a Gamma correction to the raw sensor data to generate the final image, giving us plausible mid-tones. In RAW Photo Processor, I used a Gamma value of 1, which does no correction: digital cameras usually use a Gamma value of 2.2, and accordingly this is the default setting of the program.
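Here is a quick sketch of what that gamma step does to linear values (using a plain power of 1/2.2; the true sRGB curve also has a small linear toe, which I am ignoring):

    import numpy as np

    linear = np.linspace(0, 1, 6)       # linear sensor values
    encoded = linear ** (1 / 2.2)       # simple gamma-2.2 encoding

    print(np.round(linear, 2))          # [0.   0.2  0.4  0.6  0.8  1.  ]
    print(np.round(encoded, 2))         # [0.   0.48 0.66 0.79 0.9  1.  ]
    # The midtones are lifted substantially, which is why the gamma-1 rendering
    # above looks so dark.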

So the camera does a lot of processing behind the scenes to generate our final image. I loaded the greenish image into Photoshop to see what kind of curves are needed to produce the corrected image. I adjusted the Photoshop curves so that the adjusted values of the neutral patches roughly matched the published values for this target. This is what I got:

X-Rite ColorChecker - UniWB - adjusted with curves

The neutral targets look fine, but the colors are a bit off, especially red tones; obviously there is additional image processing going on: see the article Examples of Color Mixing.

Here are my curves for this image in Photoshop:

Curves applied to UniWB

My oh my, there is a lot of adjustment going on in this image.  The diagonal line on the graph indicates the line of no adjustment; whatever is below the line has been brightened, and whatever is above the line is darkened.  The steeper the line, the more adjustment. We can deduce a few things from this graph:
  • The photo was a bit underexposed, since all the color channels needed to be brightened.
  • The blue channel is severely deficient, and needed a very steep curve.
  • Incandescent lighting tends towards the red color. If we didn't have two green sensels for each red one, the red channel would have been brighter than the green channel.
  • Getting good white balance does not guarantee that colors are correct.
The blue line was so steep that Photoshop did not offer enough precision to adjust the curve accurately! Were I not trying to illustrate this point, I would have first adjusted the levels coarsely, and afterwards adjusted the curves more precisely.

We ought to be worried about the steepness of the lines. Increased contrast — that is, steep curves — means increased noise. Brightening is a form of amplification, and amplification necessarily increases noise. For this reason, I recommend using Photoshop in 16 bit mode when applying severe curves to images, and also using RAW instead of JPEG images if much processing is foreseen, for both of these methods retain more information about the image.

However, this also illustrates that severe curves are applied to the raw sensor data within the camera itself, which increases noise. Now, this image came from a Nikon D40, which is known for having low noise — even at ISO 1600 as was used here — and I did perform slight noise reduction over the image. However, under more extreme conditions, like dim low-wattage bulbs, or with an inexpensive compact camera, we can expect lots of noise in the blue channel under incandescent lighting.

For critical work, this problem is lessened by using low ISO, shooting in RAW, using a tripod, and blending together multiple exposures so as to get an excellent blue channel. Or alternatively, use lots of high quality supplemental light, such as from strobes.


I am of divided opinion as to the effectiveness of the Bayer array. It does not appear to provide optimal performance under any of the most common lighting conditions, nor does it appear to be an optimal compromise offering good average performance. I tend to think that blue light, under almost all conditions, is under-measured by this kind of sensor. I may be wrong, because luminance — which is primarily measured by green light — is more important than color, and so perhaps a camera sensor deserves to have more green sensors. I just don't know. The Bayer array has the advantage of being compact, as it has a repeating 2x2 pattern, which is more compact than every other proposed array and so will have the greatest color accuracy for each final pixel of the image. However, we ought to consider the Foveon sensor, which records full color at every sensel, and so does not display any of the demosaicing errors found with the Bayer array and its rivals.

In dim lighting, human vision becomes even more sensitive to blue light, and so diverges greatly from what is seen by the camera, but that is a problem best considered in another post.

Monday, June 28, 2010

There is Detail in Noise

NOISE IS THE ENEMY in digital photography. Inexpensive compact cameras produce decent enough images in broad daylight, but when light dims, noise increases greatly until the image becomes quite ugly. Understandably, many photographers are interested in computer software that will reduce noise in digital images, but they ought also to know the limitations of noise reduction.

It is useful to see how digital noise works, so I contrived an image that shows noise in a fairly pure way, unlike what we see in actual digital photographs. Following is an example where I added Gaussian color noise uniformly across the image, without regard to color or luminance. From top to bottom, we have no added noise, then 25%, 50%, 100%, and then 200% noise at the bottom.

There is Detail in Noise 1

Viewing this, it is pretty obvious that noise reduces contrast, and that high contrast objects resist the effects of noise the best. The word ‘THERE’ — black on white or white on black — is recognizable even at the highest noise level, while blue on black quickly becomes unrecognizable. You can click on the image to get a larger version.

But it is clear that there is noticeable detail even in severe noise. This consideration is important when devising methods to reduce noise, and when deciding how much noise to remove from an image.

When we view extremely noisy images, we are at risk of making two types of errors. We may perceive a signal in the noise when in fact nothing but noise actually exists: this is excess credulity and is called a “Type I error”. Likewise, we may not perceive a signal when in fact one does exist: this is excess skepticism and is a Type II error. Noise reduction software makes the same kind of decision: every pixel is judged as being some combination of signal or noise, and the final image produced may exhibit errors of both kinds. An excessively skeptical noise-reduction technique will generate images with very little noise, but with little detail also. An excessively credulous technique will pick out false detail, creating ugly artifacts.

In this sample image, a fluent reader of English would be able to pick out the words in the noise better than someone who is not familiar with English. I included the odd symbols at the bottom so I could include other colors, but it is also clear that familiarity with a symbol aids our recognition, and so these symbols seem to suffer particularly badly under noise.

This example points out a severe limitation of noise reduction technology. We already know what the words are, since we are given a noise-free example image at the top. Would we be able to determine the words if we were presented with only a noisy image? Our mind certainly ‘fills in’ missing information when we know for certain what that information must be. Furthermore, natural languages have much redundancy in grammar and spelling, and this helps us to identify noisy unknown sentences if we are familiar with the language.

Noise reduction therefore quickly becomes an ‘AI-complete’ problem — that is, it requires a computer with full human intelligence. If we insist that our noise-reduction algorithm be perfect, then we are asking the computer to have the knowledge of God. Rather, we must be humble and instead accept general-purpose solutions that reduce enough noise to produce an image that is good enough; or, we must accept that our image is too noisy and instead re-shoot the subject using better equipment or technique.

There are special-purpose noise reduction algorithms for character recognition; these take into account the shapes of letters, common dictionary words, and basic grammar.  These algorithms will allow a computer to scan and read a book, but these will be completely unsuitable for general photography.

Fortunately, the image above is quite contrived, and the noise I added does not have the characteristics that we find in a digital photo. I provided that image for the purpose of demonstrating the effects of noise in general, without the complications found in a photograph. However, I would challenge any interested reader to download that image, and try various noise-reduction techniques on it. I failed to significantly reduce the worst noise using various techniques — and ugly digital artifacts or side-effects of these attempts are quite obvious:

There is Detail in Noise 3

From top to bottom are: a section from the original image; the same section processed with Photoshop Reduce Noise; then Noiseware Professional; Surface Blur of the chroma channels; and finally the popular Dust and Scratches noise reduction from Photoshop. Note the color shifts and poor performance of all methods. Are any of these results better than the original noisy image? It seems not.

Following is a generated image that has a noise profile closer to what we find in natural photographs. Here, noise is great in the shadows, and is slight in the highlights. There are different levels of overall noise in each channel, similar to what is found with photos taken under incandescent illumination: green has the least noise, red a bit more, and blue has much more noise.

There is Detail in Noise 2
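For anyone who wants to experiment with a similar noise profile, here is a rough numpy sketch that adds Gaussian noise whose strength is largest in the shadows and differs per channel; the per-channel scales are invented, chosen only to make blue the worst, as under incandescent light:

    import numpy as np

    def add_photo_like_noise(img, channel_scale=(0.04, 0.02, 0.10)):
        """img: float RGB array in [0, 1]; channel_scale: noise strength for R, G, B."""
        rng = np.random.default_rng()
        noisy = img.astype(float).copy()
        for c, scale in enumerate(channel_scale):
            sigma = scale * (1.0 - img[..., c]) + 0.005   # more noise where the channel is dark
            noisy[..., c] += rng.normal(0.0, 1.0, img.shape[:2]) * sigma
        return np.clip(noisy, 0.0, 1.0)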

The default settings of Noiseware Professional do a decent job of denoising this image, and it could easily be cleaned up nicely with a bit more effort:

There is Detail in Noise 2b

The limits of noise-reduction technology tell us that we have to be careful to avoid noise when taking a picture, and that we have to use intelligence when applying noise reduction, not relying fully on the computer. If we know for certain what the signal ought to be, then we can do manual retouching: in the image above, I would retouch the blue-on-black word NOISE, and do a general blurring of the dark background. If our job is to produce a good image, and not necessarily a faithful one, then artistic judgment is needed to create plausible detail with little noise.