Monday, September 20, 2010

Over and Under Exposure

GETTING EXPOSURE RIGHT is one of the challenges — and annoyances — of photography. I had long experience with black and white film photography, and so I thought I had a pretty good understanding of exposure and how to get a decently exposed image. When I got into digital photography back in 2001, I was quite disappointed with the results — the automatic exposure was often badly wrong, among other problems — and I held the mistaken opinion that it was the camera's job to set itself properly. You can read more about this in my old posting A Camera Diary.

Besides thinking that good photography merely involves choosing the ‘best’ camera, I was quite naïve about the properties of color digital images, and how they differ from black and white film. Exposure is far more critical in color photography than in black and white.

Please consider the following series of images, taken at ISO 200 and f/8, with exposure times ranging from 1/8 second for the darkest to 8 seconds for the brightest. This Beaux-Arts building was built in 1900 for the Saint Louis Club, later became the headquarters of the Woolworth's company, and now houses the Saint Louis University Museum of Art.

Saint Louis University, in Saint Louis, Missouri, USA - Saint Louis University Museum of Art at dawn - side-by-side composite of 4 exposures

Which image is exposed best? Certainly exposure is something of a matter of taste: your particular monitor settings may make one look better than another, and you might change your opinion on a different computer, or if you printed these. However, too much exposure will give you all white, and too little will give you all black, and then you no longer have an image of a building at all. Objectively speaking, you have to expose within a specific range, which will vary depending on the subject matter, your camera, and your post-processing.

If I had to choose between these four images, I'd select either the upper right hand image, or the lower left hand one; although I think that an intermediate exposure between these two would have been better. I took this in the morning, and perhaps I ought to have waited a few minutes for the sky to get brighter, which would have given a better balance of light over the entire image.

You might say that you'll simply choose whichever image looks best to you, and of course, in the end, you do have to do that. Just because a machine says that one image is better than another doesn't mean we have to follow that advice, because photographs are intended for humans, not machines; just because the camera says a photograph is correctly exposed doesn't mean it will look best to us. But limiting ourselves to gut instinct can't be right either: “The unexamined life is not worth living for a human being,” wrote Plato in the Apology. Instead, we ought to ask some questions. Why does one image look better than another? How can we reliably and predictably make good images?

Just because an image appears a bit dark does not mean that it is bad — a lot of detail can be pulled up from the shadows. Generally, overexposure is more of a problem with digital images than underexposure, and so the standard advice is to expose for the highlights and process for the shadows. This, by the way, is the opposite of the advice for shooting film negatives, where you generally have to expose to get good shadow detail.

Let's pull up some shadow detail from the upper right hand image:

Saint Louis University, in Saint Louis, Missouri, USA - Saint Louis University Museum of Art at dawn - lightened shadows

I think this is an adequate image. Lots of detail is normally lost in the shadows, and it is easy to make this detail visible. This is just a rough brightening; there are many techniques for showing good detail in shadows. Were I doing a better job, I'd add more local contrast in the shadows, which look a bit flat here.

My intention when taking these photos was to produce a series of images which I would later blend together into a decent single image, with lots of highlight and shadow detail, little noise, and good color rendition. Before I submit the images to my exposure-blending software, I create hard exposure masks which cut out those parts of the images which are over- and underexposed; the end result is a nicer-looking image with low noise and good color tone. Without these masks, the software produces unpleasant color shifts in the highlights and excessive noise in the shadows. Masking also reduces the haloing artifacts generated by my software.

Overexposure

It would be helpful to define our terms. A pixel in an image is overexposed if any one of its three color channels is at its maximum value; for eight-bit images, that means any channel equal to 255. Now, 255 might just happen to be the correct value for a pixel, but that is unlikely, since everything brighter is also recorded as 255. If any one of the three color channels clips due to overexposure, then you will get color shifts in the final image.
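To make the test concrete, here is a minimal sketch in Python with NumPy; a sketch only, assuming the image has already been loaded as an 8-bit RGB array (the function name is mine, not any standard tool's):

import numpy as np

def overexposure_mask(img):
    # True wherever a pixel has ANY channel at the maximum.
    # img: NumPy array of shape (height, width, 3), dtype uint8.
    return np.any(img == 255, axis=-1)

# For example:
# mask = overexposure_mask(img)
# print(f"{mask.mean():.1%} of pixels are overexposed")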

This color shift is rather prominent on the brightest of my sample images. Here is a close-up view; note how the color of the building near the light goes from a nice orange to yellow, and then to white, while the blue sign goes to cyan, then white:

Saint Louis University, in Saint Louis, Missouri, USA - Saint Louis University Museum of Art at dawn - overexposure masks

In the upper right hand corner, I put in black wherever any one of the three color channels of a pixel is equal to 255. In the lower left hand corner is a mask which shows wherever the RGB luminosity goes to 255; notice how it masks out a smaller area than the full overexposure mask.

RGB luminosity is roughly defined as:
luminosity = 0.30 × Red + 0.59 × Green + 0.11 × Blue
This approximates the sensitivity our eyes have to each primary color. But this value will often be less than 255 even when one of the channels is overexposed. Some camera histograms show this value instead of three individual color histograms, which can be less than helpful. Also, some exposure-blending and tone-mapping methods use this value as an estimate of brightness, and their final images often show these color shifts.
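To see how a luminosity histogram can hide clipping, consider a strongly blue pixel, in the same NumPy sketch style as above (the particular numbers are merely illustrative):

import numpy as np

def luminosity(img):
    # Approximate perceived brightness, using the weights above.
    return img @ np.array([0.30, 0.59, 0.11])

pixel = np.array([40.0, 40.0, 255.0])  # blue channel fully clipped
print(luminosity(pixel))  # about 64: nowhere near 255, so a
# luminosity-only histogram gives no warning of the clipping.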

In the lower right hand corner I superimposed the full overexposure mask on the image. Note that it covers up nearly all of the areas that show an obvious color shift, but not all.  There appears to be some bad color bleeding out from around the edges of the mask.

This image, even though it comes from a camera RAW file, has still been heavily processed by the RAW converter, plus I did some lens-distortion correction as well as straightening of the image. My camera uses a matrix of light sensors, each sensitive to only one color. When the RAW converter makes the final image, it estimates the missing colors at each pixel by examining neighboring pixels. Likewise, when correcting for lens distortion and camera tilt, Photoshop estimates the correct pixel values by examining neighboring values. So we are always doing some averaging; but consider this example equation:
Estimated value = (250 + 240 + 235 + garbage) / 4 = garbage
So the effects of overexposure anywhere in an image will spread a bit to neighboring pixels. In practice, when I make a mask like this, I will mask out everything that has a value over 250 or so, which seems to get rid of most if not all of these nominally good, but actually bad pixels, without losing too much highlight detail. Someday I'd like to see software that offers a mask channel associated with images, which will show all pixels which are overexposed, or which are indirectly unreliable due to processing.
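That suggestion is easy to sketch as well, assuming a NumPy uint8 image and SciPy for the mask-growing step (the 250 cutoff and the two-pixel growth are the sort of values I use, not universal constants):

import numpy as np
from scipy.ndimage import binary_dilation

def extended_overexposure_mask(img, cutoff=250, grow=2):
    # Mask anything at or above the cutoff, not just 255...
    near_clip = np.any(img >= cutoff, axis=-1)
    # ...then grow the mask so that neighbors contaminated by
    # demosaicing and geometric correction are masked out too.
    return binary_dilation(near_clip, iterations=grow)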

Here is the brightest image, with a black overexposure mask superimposed on it:

Saint Louis University, in Saint Louis, Missouri, USA - Saint Louis University Museum of Art at dawn - extended overexposure mask

I extended the mask a little bit, so as to eliminate the color shift which extends a few pixels beyond the measurably overexposed parts. Note that the sky is masked out, because it is completely overexposed in the blue channel. About 1/3rd of the image is overexposed. There is good detail throughout the rest of the image. Note that there is a slight blue halo around the roofline; this is because this image is not particularly sharp, and so there is a bit of blur along edges which does not get masked out.

Photography has many trade-offs, requiring us to make choices; we neither want to overexpose, nor do we want to underexpose. Ultimately, some detail doesn't matter, and specular highlights and light sources are usually considered unimportant — it is OK to overexpose them most of the time, as we see even in our darkest example photo above. The lights aren't the obvious subject of the photo.

Now if you have large areas of color in your photo, you probably don't want to overexpose them, even if they aren't the subject. Digital cameras will often overexpose blue skies, which I think is objectionable most of the time, even if it is not the subject of the photograph.  This kind of overexposure is particularly objectionable when the sky goes from blue to cyan to white in a single image: that just doesn't look natural. See my article Three Opportunities for Overexposure. Alternatively, it is often best to strongly overexpose a background, turning it a pure color or white, rather than having a muddled partial overexposure with obvious color shifts.

Another problem with overexposure, besides color shift, is that it removes texture from the image. Areas with even one channel overexposed will appear somewhat flat. Now there are techniques which you can use to rescue such overexposed images, by generating plausible detail from the remaining channels. This is difficult to do correctly, and is time-consuming.
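One crude way such a rescue might work is to fake the clipped channel from a surviving one; this sketch assumes the red channel clipped while green did not (real tools work locally and far more carefully):

import numpy as np

def rescue_red(img):
    # img: float RGB array scaled to 0.0-1.0.
    r, g = img[..., 0], img[..., 1]
    clipped = (r >= 1.0) & (g < 1.0)
    trusted = ~clipped & (g > 0.05)
    # Estimate a typical red/green ratio from unclipped pixels,
    # then synthesize plausible red texture from the green.
    ratio = np.median(r[trusted] / g[trusted])
    out = img.copy()
    out[clipped, 0] = np.minimum(g[clipped] * ratio, 1.0)
    return out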

Now technique ought to serve the subject matter; the subject does not serve the technique except perhaps when you are creating images for teaching. Just because the blue channel of the sky in an image is overexposed does not mean that you can't end up with a terrific photograph, if the subject is worthy.

Underexposure

Defining overexposure is easy, even if we have to be careful and realize that it isn't quite as simple as we would like. Defining underexposure is far more problematic.

I defined overexposure at any given pixel as the situation where any one channel is at its maximum value, generally 255 for 8-bit images. We might naïvely assume that underexposure is the situation where any color channel equals 0. For example:

Saint Louis University, in Saint Louis, Missouri, USA - Saint Louis University Museum of Art at dawn - naïve underexposure mask

Not much of a mask at the bottom. This is next to worthless: just a few black dots here and there, even though there is a lot of black in this image. There are several problems with our effort here, the most significant being the tremendous amount of noise, relatively speaking, found in the darkest parts of the image, much of it due to the quantum fluctuation of light: light is detected in discrete quanta, due to a mysterious property of matter and energy at small scales, and so at low levels it is quite non-uniform. There are also several sources of noise in the camera itself, and these add to the signal, moving it away from zero. Also consider the indirect problem mentioned with overexposure: image manipulation ‘infects’ neighboring pixels, and since no pixel value can be less than zero, this averaging can only increase the value found at pixels which ought to be zero. Noise at low levels does not average out to zero, but instead brightens dark pixels.

Some cameras, as well as RAW converters, will do plenty of image manipulation including noise reduction or black-point cutoff, making our low-value pixels even more unreliable.

Instead, I use a working definition of underexposure which masks out values near zero. Now, should I take all three color channels into account at once, or each color channel separately? If I consider all three together, then I might fail to mask out a particularly poor, noisy channel when the other two are good.

But if I mask out each channel separately, then I might get the situation where a particular pixel is both overexposed and underexposed! I often see this with stained glass: for a particularly brilliant red piece of glass, I may have the Red channel at 255 while the Blue channel is at 0, indicating that the color is particularly pure and outside the color gamut of the camera, or of the color space used in Photoshop.
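Both working definitions fit the same sketch form as before (the floor of 8 is only illustrative; the right cutoff depends on the camera, the ISO, and the converter):

import numpy as np

def under_mask_luminosity(img, floor=8):
    # Mask pixels whose overall luminosity is near zero;
    # roughly what the Threshold method described below does.
    lum = img @ np.array([0.30, 0.59, 0.11])
    return lum <= floor

def under_mask_per_channel(img, floor=8):
    # Stricter: mask wherever ANY channel is near zero. Note
    # that a brilliant red piece of stained glass can be caught
    # by this mask and by the overexposure mask at once.
    return np.any(img <= floor, axis=-1)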

There are several methods I use to mask out dark noise. The simplest uses the image itself and the Threshold slider, which operates on the RGB luminosity function shown above. I examine the image while moving the Threshold, and stop when a reasonable amount of noise is eliminated. Using this process on our darkest sample image, we see:

Saint Louis University, in Saint Louis, Missouri, USA - Saint Louis University Museum of Art at dawn - underexposure mask detail

The road and the top of the building on the left show the most noise. I adjusted the Threshold slider until much of that noise was eliminated, as you can see in this detail from the lower right hand corner of the image. You don't want to do too much of this.

I also use the same process, but treating each channel independently. This makes an exceptionally clean final image, but only if we don't have the simultaneous over- and underexposure problem mentioned above.

Saint Louis University, in Saint Louis, Missouri, USA - Saint Louis University Museum of Art at dawn - three channel underexposure mask

This looks like a pretty good mask. It masks out the most underexposed parts of the image while not showing too much residual noise.

There is another technique which illustrates the problem of dark noise quite dramatically. I take each channel separately and brighten it with Curves until it no longer appears to be a photograph, but rather a line and charcoal drawing. Instead of a nice, apparently continuous series of tones, we get discrete steps, which shows that there is not enough spacing between brightness levels to produce a good image. Usually this effect is most prominent in the Blue channel, because normal in-camera processing greatly amplifies its shadows.

Saint Louis University, in Saint Louis, Missouri, USA - Saint Louis University Museum of Art at dawn - line-drawing effect

You may want to click on the image to see the full-size version. Some image-enhancement software will work quite hard to bring out detail in shadows, but this is certainly detail we don't want to emphasize — unless of course we are going for a cartoony look. As it happens, the red and green channels in this image seldom go all the way to zero, and so they show little of the line-drawing effect; but the blue channel does, and so our mask eliminates a considerable amount of noise.
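The same diagnostic can be sketched as an extreme power curve applied to one channel (the gamma of 0.2 is merely an aggressive illustrative value, standing in for the Curves adjustment):

import numpy as np

def reveal_shadow_posterization(img, channel=2, gamma=0.2):
    # Brighten one channel (blue by default) so hard that the
    # few discrete levels near zero spread into visible bands,
    # producing the line-and-charcoal-drawing look.
    ch = img[..., channel] / 255.0
    return (255.0 * ch ** gamma).astype(np.uint8)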

Creating masks like these can show you how much of your image consists of high-quality pixels. I also use them for creating exposure blends. Here are those four images blended together, masking out the big color shifts and dark noise:

Saint Louis University, in Saint Louis, Missouri, USA - Saint Louis University Museum of Art at dawn

It looks pretty good, and there are hardly any color shifts, except in the areas which were overexposed even on the darkest base image. Notably, the building and the blue sign have the uniform color we need, and we have excellent detail in the shadows. The major artifact here is the sidewalk light to the right of the stairs: it turned off between the second and third photos, and so we get a strange rendering of it here. There is also some roughness along the roofline. You can click on the image to see the full-resolution version.
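My actual blend was made with dedicated software and hand-made masks, but the core idea can be sketched as a masked average; this ignores the alignment, exposure normalization, and multi-scale blending that real tools perform:

import numpy as np

def masked_blend(images, floor=8, ceiling=250):
    # images: list of aligned uint8 RGB arrays, all the same shape.
    total = np.zeros(images[0].shape, dtype=np.float64)
    votes = np.zeros(images[0].shape[:2], dtype=np.float64)
    for img in images:
        # A frame votes at a pixel only where none of its
        # channels is clipped at either end.
        ok = np.all((img >= floor) & (img <= ceiling), axis=-1)
        total += img * ok[..., None]
        votes += ok
    votes = np.maximum(votes, 1)  # avoid division by zero
    return (total / votes[..., None]).astype(np.uint8)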

Conclusion

The phenomenon of color shift — when even one channel is overexposed — severely limits quality color photography. Quality photographers frequently control it with solid studio lighting, or with supplemental lighting from fill-in reflectors. Or you can blend multiple exposures, but then the problem becomes finding the right algorithm or software to do it.

Conversely, this color problem implies that quality black and white photographs ought to be easier to produce: we can introduce severe changes of contrast without worrying about color shifts.

Sunday, September 5, 2010

Imaginary and Impossible Colors

STARE AT THE TOP square for a minute or more. Do not move your head, and keep your eyes right in the middle of the square.

Slowly move your eyes to the square below.

Imaginary colors

Glorious, isn't it?

You are seeing colors that are impossible to actually portray in the real world, other than transiently, as you see here. These are called “imaginary colors”. You can't make a paint that shows these colors, nor project light of that color on a screen, nor show it on a computer monitor; a color meter will never measure them.

Here is the problem. The human eye has basically three classes of color sensors, or cone cells: one generally sensitive to the red side of the color spectrum, another to the blue side, and a third to the green middle, along with the green-blue-sensitive rod cells that work most prominently in dim lighting. There are three color sensors, three only (although a few people, probably only women, may have four classes, while many others, mostly men, have fewer than three).

There are some deep, rich red colors which do not stimulate either your green or blue cone cells.  There are some deep, rich, dark violet colors which do not stimulate green or red.  However, there are no green colors whatsoever which do not also stimulate your red or blue cells, or even both.

A camera mimics human vision by also having three classes of sensors, and as with the eye, no color will give a signal in the camera's green channel without also giving a signal in red or blue or both. There will, of course, be reds without blue and blues without red. You can examine your own RAW photos with the excellent RAW Photo Processor, set to do minimal processing of the image.

Unprocessed RGB

RAW Photo Processor was set with UniWB and no color space assigned, which gives basically the actual signal received by the pixels. No green color leaves both the red and blue channels dark. That we mathematically represent a pure bright green in the sRGB color system as Red=0, Green=255, and Blue=0 tells us very little about how the eye or camera senses the color: you'll never get a green signal without significant amounts of red or blue or both.

Human color vision, as it is, has the potential to see these supergreen colors, unadulterated by excess red or blue. Our experiment above shows that you can actually see these colors, if only for a brief moment. Individuals with synesthesia or severe migraine headaches may see them more often.

Apparently, when we stare at a color long enough, our eyes become ‘fatigued’ and lose sensitivity to that color. Staring at the red-blue colors leads to decreased sensitivity to them — and so we can see, ever so briefly, imaginary supergreen. I suspect that this same mechanism is responsible for the automatic white balance of the eye: we can see gray tones correctly under a wide variety of lighting, while a camera set to a fixed white balance cannot, so the eye must have some mechanism for subtracting out the color of the light.

Since we generally have only three types of color sensors in our eyes, with well-characterized properties of light absorption, we have the basis for creating a precise mathematical model of color: and this model will have precisely three coordinates. This is despite the intense processing that goes on in our eyes and brains, processing that is hardly understood at all, even though we experience it all the time. That it is often difficult to put our experiences into words does not mean that we ought not attempt the work.

Following is a chart which represents the full gamut of real saturated colors seen by human vision:

Cie_Chart_with_sRGB_gamut_by_spigget

Image originally from Wikipedia. Source and attribution are here.

This image approximately illustrates the full range of midtone saturated colors that can actually be reproduced by paint or by colored lights. Notice the straight line between blue and red? It shows that we can in fact get pure red and blue tones, unadulterated by any green at all, along with the purple and scarlet mixtures of the two. Notice that the hump of the curve is in the green region, which shows that there is no physical green color which is not also a bit red or blue. If a supergreen color actually existed, this chart would be a perfect triangle. The full range of human color imagination, including the supergreen colors, very likely is a triangle — for we can predict quite accurately what kind of supercolor we will see in experiments like the one above.

The color gamut shown above is only approximate in color, because the image itself is limited to the gamut of the sRGB color system, which is itself represented by the small triangle inside of the big horseshoe. The corners of the triangle represent the primary colors used by sRGB.

sRGB is quite standard, and is used by most cameras, computer monitors, web browsers, and even High Definition Television, but it can only show about 35% of all possible physical colors, and tends to be lacking in purple, green, and cyan. By using excellent quality color filters, and a bright enough light source, you can display a much wider gamut of colors — the triangle will be bigger and fill up more of the horseshoe — and for a price you can buy a high-gamut monitor that can display more colors than the puny sRGB standard.

This particular standard was chosen by Microsoft and Hewlett-Packard because it works with even cheap computer monitors, and because it uses only 8 bits of data for each of the red, green, and blue color channels, which was a serious limitation back in the days of expensive computer memory. This standard gives us a large enough gamut of colors, with small enough spacing between them, to avoid banding artifacts. However, I always use 16 bits when working on my pictures, even though I eventually have to reduce them to 8 when I show them on the web. (Computers, by the way, are particularly efficient with powers of two when manipulating data, so we often see 4 bits, 8 bits, 16 bits, 32 bits, and so forth, each doubling the last.)

To display a color on a computer or by projection, you need at least three primary colors, and the particular colors you use, and their brightness, determine the final gamut. But notice that you have to use actual, real colors for your projector — they have to be within the horseshoe — and so there will always be colors that cannot be represented. If you want more gamut you eventually will have to add more colors, which is precisely what we see with high quality color printing. This is impractical with monitors, however, which are usually limited to just three primary colors.

In a sense, the three primary colors are a bit arbitrary, and artists have used a variety of primary color systems in their theories. However, some primary color systems are better than others, because they have a larger color gamut, or can represent a larger variety of basic colors. Undoubtedly, a good system should reach toward the bottom of the horseshoe, so I would expect most any color system to attempt to get as close to the bottom corners as possible; the open question is which third color to use. Do you want good greens or good cyans? You can't have both if you use just three colors.

Note that painters use subtractive colors, so their primary colors out of necessity are the opposite of what is shown here: cyan, magenta, and yellow rather than red, green, and blue. In particular, the painters' primary palette will be especially deficient in good blues, greens, and reds. This is why some pigments are highly prized by artists: they have colors that are otherwise unmixable. In the ancient Mediterranean, the most costly dyes came from various species of Murex snails, and the colors produced lie down at the bottom of the horseshoe chart — with the purple or scarlet colors used for Imperial dress, and the blue used for the fringes of Jewish prayer shawls. These colors are decidedly non-mixable: you have to obtain the pure colorant, for you cannot get these colors by mixing others.

If you want to represent the entire gamut of colors mathematically, using only three numbers, then you have to go outside the bounds of the horseshoe. But then some combinations of numbers will give colors that cannot be produced by any paint or filter. Remember, though, that a supergreen color can actually be experienced under some circumstances; whereas some wide-gamut color systems represent colors that cannot exist even in our imaginations, like a scarlet black or a deep blue white. As far as I know, a true wide-gamut color system that includes supergreen as one of its primaries does not exist, but it would be useful if it did, since it would closely represent the entire human visual system, including our imagination, while excluding those colors which are impossible even to imagine.

Here are the most commonly used color systems, with their approximate coverage of the visible gamut:
  • sRGB: about 35%
  • ColorMatch RGB: a bit larger than sRGB, with slightly different primaries
  • Adobe RGB: 50.6%
  • Wide-Gamut RGB: 77.6%
  • CMYK: smaller than sRGB, but not completely overlapping it
  • ProPhoto RGB: most of the visible gamut, though 13% of its colors are imaginary or impossible
  • L*a*b* colorspace: 100% of visible colors, along with lots of impossible colors
If you are a digital photographer, you have to choose a color system, and the question becomes which system to use.  Now sRGB is used everywhere, and is often the only color system that will look good on output: most devices just can't do much better than sRGB, and many output devices assume that sRGB is used.  If you output a wide-gamut file on an sRGB device, its colors will be muted, giving you precisely the opposite effect you desired.
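You can see the mismatch numerically. This sketch uses the standard published D65 conversion matrices (rounded here) to push the pure Adobe RGB green primary into sRGB coordinates:

import numpy as np

# Linear Adobe RGB (1998) -> CIE XYZ, and XYZ -> linear sRGB,
# both relative to the D65 white point.
ADOBE_TO_XYZ = np.array([[0.5767, 0.1856, 0.1882],
                         [0.2974, 0.6273, 0.0753],
                         [0.0270, 0.0707, 0.9911]])
XYZ_TO_SRGB = np.array([[ 3.2406, -1.5372, -0.4986],
                        [-0.9689,  1.8758,  0.0415],
                        [ 0.0557, -0.2040,  1.0570]])

green = np.array([0.0, 1.0, 0.0])  # the pure Adobe RGB green primary
print(XYZ_TO_SRGB @ (ADOBE_TO_XYZ @ green))
# about [-0.40, 1.00, -0.04]: the negative components mean this
# color lies outside the sRGB triangle, so an sRGB device can
# only clip it to a duller green.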

Often I hear the advice that photographers ought to set their cameras to Adobe RGB. Just as often, I hear photographers complain that their photos look washed out and unsaturated: they did use Adobe RGB, but didn't know how to manage it. So I recommend using sRGB and nothing else, even though it isn't the ‘best’.

When I do need to produce colors outside of sRGB, for example when preparing images for commercial four-color printing, I will use a larger-gamut color system and eventually work directly in CMYK. If you are outputting to a broad-gamut color printer that uses more than four inks, then I'd use a high-gamut color system and load the printer's color profile into Photoshop, keeping an eye on the gamut warning feature. If you are outputting images to the web, then use sRGB.

In the philosophy of logic, we say that a statement is true if it corresponds with being, with something that actually exists in the entire realm of being. But we say that a statement is meaningful if it does not encompass a logical contradiction — a square circle is not meaningful. If you can imagine something that does not actually exist — see it with your mind's eye — then your imagination still has meaning, even if it doesn't have truth. For example, you can imagine Doberman pinschers with wings; these don't actually exist in our world, but you can imagine them without contradiction. Likewise with supergreen colors, which are meaningful even if you can't have a paint of that color.