Monday, November 29, 2010

Color Spaces, Part 2: CMYK

THREE NUMBERS SUFFICE. If you desire to mathematically describe or represent any color visible to the human eye, the simplest and most orderly models will use exactly three numbers, no more and no less.

But please notice that I wrote that three numbers suffice. Normally we think of color theory in terms of mixing colors: for example, computer monitors typically have three kinds of dots, each a certain precise shade of red, green, or blue. Mixtures of these color dots at various intensities will produce all the shades viewable on the screen, from dark gray or black, to white, and with a rainbow of colors throughout. Alas, although we can accurately characterize every known color by three numbers, we cannot mix all known shades with three primary colors. Three colors do not suffice, and this is the color gamut problem. If you choose three colors for your primaries, then no matter which colors you choose, there will still be colors that you are unable to mix.

For more information, see my article on imaginary and impossible colors.

For reasons of cost and practicality, most color devices use just three colors, and these provide a limited gamut of colors. Most computer monitors and High-Definition televisions use the sRGB color gamut, which can display about 35% of possible colors. Expensive high-gamut monitors can approach 50% of possible colors. Generally missing in these color output devices are hard-to-reproduce colors such as scarlet and Imperial purple. Good cyans and some greens are also missing, but the system is generally adequate for most uses.

In the RGB color system, red, green, and blue lights are mixed together to provide a wide range of colors and shades. But we cannot use only red, green, and blue inks on a page to produce a similar range of colors. See my article, Color Spaces, Part 1: RGB, for examples of how these additive colors work together: if you shine a red and a green light together, you will get a bright yellow color, but if you mix red and green paints together, you will get a dark muddy mess. You cannot get a bright color by mixing RGB inks. Mixing saturated colored lights together will always produce a brighter color; mixing saturated colored paints together will always produce a darker color. So when we put ink to paper we have to use a subtractive color system, which starts from light primary colors, since every mixture of inks can only get darker.
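To make the distinction concrete, here is a minimal sketch in Python, using an idealized model in which colored lights add their intensities while paint reflectances multiply; the specific reflectance numbers are illustrative assumptions, not measurements:

```python
import numpy as np

# Additive mixing: colored lights add their intensities.
red_light   = np.array([1.0, 0.0, 0.0])   # RGB intensities
green_light = np.array([0.0, 1.0, 0.0])
print(np.clip(red_light + green_light, 0, 1))   # [1. 1. 0.] -- bright yellow

# Subtractive mixing (idealized): each paint absorbs part of the light,
# so the reflectances multiply. Red paint reflects mostly red, green
# paint mostly green; together they absorb nearly everything.
red_paint   = np.array([0.8, 0.2, 0.1])   # illustrative reflectances
green_paint = np.array([0.1, 0.7, 0.2])
print(red_paint * green_paint)   # [0.08 0.14 0.02] -- a dark muddy mess
```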

Recall the discussion in the RGB article about the opponent color relationships. These are opposing color pairs which produce shades of gray when mixed, rather than a new hue.
Red is opponent to cyan
Green is opponent to magenta
Blue is opponent to yellow
And since we are working with ink on paper, I might add:
White is opponent to black
The three primary colors in the RGB or additive color system are red, green, and blue, while the three primary colors in the CMY or subtractive color system are cyan, magenta, and yellow. RGB and CMY are therefore opponent to each other.

Consider the following image:

Broemmelsiek Park, in Saint Charles County, Missouri, USA - red berries against blue sky

We have red berries against a blue sky. In the RGB system, the red berries will be bright in the red channel, and dark in the green and blue channels, since pure reds have little to no green or blue in them. If we examine the color channels separately, we see this:

Red berries - RGB

Red berries are almost white here, because in the RGB color system white in a channel indicates a strong signal, while black means the absence of that particular color. Since the berries are nearly pure red, they are white in the red channel, and black in the other channels. Since we have a nice blue sky, the sky is suitably lightest in the blue channel; and since midday blue skies tend towards cyan and not magenta, the green channel is brighter than the red.

The CMY color system works a bit differently. White indicates an absence of ink, while black means that a particular ink has 100% coverage. So a part of an image where all three channels are white means that no ink is put on the page, and so the white color of the page shows through. The same image in the CMY color system is this:

Red berries - CMY

Red berries have no cyan color in them, and so are white in the cyan channel. Magenta and yellow ink mixed together makes red, so the berries are dark in both those channels.  Likewise, blue skies have little or no yellow in them, and so the sky in the yellow channel is light, and is dark in the cyan channel, meaning there is lots of cyan ink there. Cyan plus magenta equals blue, and since our cyan channel is darker than the magenta, the blue sky will properly be a greenish blue shade and not purple.
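In the simplest model, then, CMY is just the inverse of RGB: full light in a channel means no ink in the opponent channel. A sketch of this naive conversion (real print workflows use ICC profiles and dot-gain compensation, so treat this only as the idealized relationship):

```python
import numpy as np

def rgb_to_cmy(rgb):
    """Naive conversion: ink coverage is the inverse of light intensity.
    0.0 means no ink (white paper shows through), 1.0 means 100% coverage."""
    rgb = np.asarray(rgb, dtype=float) / 255.0
    return 1.0 - rgb

# A red berry pixel: little cyan, lots of magenta and yellow ink.
print(rgb_to_cmy([220, 30, 40]))   # approx [0.14 0.88 0.84]
```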

Please note that the RGB and CMY channels look nearly identical. Working in the CMY color system is hardly different from working in the RGB color system, because of the opponent colors used. The channels are not exactly identical, since the printing and television industries use slightly different color standards, but they are close.

Recall the discussion above about limited color gamuts, and how three primary colors cannot produce the full gamut of colors visible to the human eye. Computer monitors really have it easy, since they have powerful back-lighting which can produce colors much brighter than the artificial illumination typically found indoors. But poor printed pages do not have that advantage: the brightest tone available will always be the paper itself, and that paper will be duller than the room lighting. And so, printed output will generally have a poor color gamut.

But we can expand the color gamut if we add more colors of ink. Full-color printing always adds at least one additional color to expand the gamut, and in commercial printing, that color is black:

Color space example - CMYK

The standard cyan, magenta, and yellow inks used in the printing industry really don't mix together well to make a good black; rather, they look muddy. Another problem is that commercial printers have what is called an ink limit: some presses just can't take too much ink on the page without causing problems, and so printers will insist on a limit to the total ink coverage on any given spot on the page. Some shoddy printing may even have an ink limit of 240%, which means that you can't mix together full coverage of our three colored inks, since that would give us 300% coverage, which is over the ink limit. 100% black ink will replace 300% colored ink, which is quite a savings, and the black ink looks much better than an equal mixture of colors. Adding black expands the color gamut of the printed page, and does it while ensuring a cleaner press run with less chance of smudging ink.
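The simplest form of this ink swap is full gray component replacement: the gray component shared by all three inks is removed and printed as black instead. A sketch, assuming the simple subtractive variant (real RIPs apply gentler black-generation curves and honor the press's total ink limit):

```python
def cmy_to_cmyk(c, m, y):
    """Full gray component replacement, with ink values from 0.0 to 1.0.
    The component common to all three inks becomes black (K)."""
    k = min(c, m, y)           # the gray component
    return c - k, m - k, y - k, k

# 300% coverage of colored ink becomes 100% black ink:
print(cmy_to_cmyk(1.0, 1.0, 1.0))    # (0.0, 0.0, 0.0, 1.0)
# A muddy dark color drops from 200% total coverage to 100%:
print(cmy_to_cmyk(0.9, 0.6, 0.5))    # (0.4, 0.1, 0.0, 0.5)
```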

Here is our image in CMYK (where 'K' means 'key' or black):

Red berries - CMYK

The twig is dark brown, and much of its tone now comes from the black channel, as do the shadows. See also how Photoshop removed ink from the color channels: shadows there are now a medium gray.

The CMYK color gamut is considerably smaller than the sRGB gamut most often used in the computer industry and by digital cameras. However, the CMYK gamut is not completely contained within sRGB: printing can produce better cyan, magenta, and yellow, whereas sRGB produces better red, green, and blue.

We can expand our color gamut by adding colored ink. In the printing industry, these are called spot colors. If you don't think that the standard color mixtures are good enough, you can pay the printer to add spot colors. Be aware that this can be quite expensive, and is typically only used for the finest work. Were I to use an accurate spot color for the red berries, they would then become white in the CMY channels — since most of the color would be transferred to the new spot channel.

Cheap computer printers use the CMYK color system. Quality computer printers will have more than these four colors, and there are some models that use ten. But if you use a desktop color printer to output photographs, be aware that you will be paying several dollars per page for the ink alone. Your costs may be fifty times higher than what a commercial printer charges in bulk.

For a further discussion of CMYK, click here for part 2.
If you think you understand CMYK, then take A CMYK Quiz.
For an overview of the RGB color system: Color Spaces, Part 1: RGB
For color spaces more natural to artists, see Color Spaces, Part 3: HSB and HSL.
For a color space designed to be visually uniform, see Color Spaces, Part 4: Lab.

Thursday, November 25, 2010

Quick Tips for Food Photography

  1. Shoot quickly — food fresh out of the oven or refrigerator looks better.
  2. Use natural sky lighting. Food often lacks definition, so a small or fairly distant window can produce good shading. Generally, you want the light to provide sharp, well-defined shadows to enhance the texture of the food. You may have to use fill-in reflectors, otherwise color and texture will be lost if large areas of shadows are too dark. Aim for a 1-to-2 E.V. range between large lit and shadowed areas: of course, dark shadows under a plate for example are completely acceptable, just not on the main parts of the food itself.
  3. Avoid using the camera's own flash. Avoid mixing natural and artificial lighting, unless both have a close color balance. Authorities in food photography state that it is difficult to use artificial lighting well: when they do use it, they prefer small, distant light sources to provide sharper shadows.
  4. Set your exposure and post-processing so that you get good highlights on the food: having a full exposure range will also bring out the colors of the food (in Photoshop, using Levels or Curves in RGB mode will enhance color). Be sure that you don't overexpose too large an area, because you might get muddy color shifts. It is OK to overexpose specular highlights.
  5. The color of food is very important to make it look appetizing. Be sure to do a good color balance. Contemporary food photography seems to prefer a slightly cool color balance, while traditional food photography preferred slightly warm: both look good, as long as the color balance is close to neutral.
  6. Use props to good effect, such as tablecloths, utensils, glasses, napkins and shakers. But be aware that the food itself is the main subject and shouldn't be overwhelmed with secondary items.
  7. Contemporary food photography uses very shallow depth of field, and prefers lenses with excellent bokeh or background blur. This is tricky to do right, for you have to judge the correct focus point. While I think this effect is attractive, perhaps it is a bit overdone. Some use tilt/shift lenses — or even bellows cameras with these motions — in order to precisely control the plane of focus.
  8. Food photography is essentially still-life photography. There is an immense body of work in still-life, particularly with painting. Do some research and use still-life theory to good effect.
  9. Your image may not look like you remember seeing it, due to the dim-light adaptation of the human eye. In particular, your image and texture may look a bit flat. In this situation, food photos may benefit from having the blue color channel blended into the image to give greater contrast to specific colors. See my articles on the Purkinje Correction.
  10. Get low. Typically, we look down on food at about a 45 degree angle; this might not be best for getting a good shot. Get a bit lower.
  11.  Check your background. Be sure it doesn't detract from the food, which is your main subject. Classical still life preferred a black background, while contemporary food photography likes a white or pastel background, completely out of focus. You don't want the eye to be distracted by the background in most cases. Alternatively, your photo may only show the table top.
  12. Food benefits from extreme lens sharpness. Macro lenses are particularly prized for this sort of work. Use a sturdy tripod and focus carefully. In post processing, use good techniques to preserve and enhance sharpness.
  13. To give a good perspective, most food photographers use a slight telephoto lens for this work, and set their camera several feet away from the food; six feet is better. If you have a stylist, be sure there is plenty of room for working between the camera and the subject. The wider the lens, the more area you have to control for your photo: but some have employed wide angles and great depth of field to portray an entire kitchen along with the food.
  14. Food styling — that is, preparing the food itself to look good in photography — is an advanced specialty, and can be quite involved. I recommend the book Food Styling: The Art of Preparing Food for the Camera by Delores Custer.
  15. My food photography can be seen in the book Thursday Night Pizza, by Fr. Dominic Garramone. Click here to see larger photos of the pizzas: these photos were taken from directly above with no photo styling, per instructions from the publisher. Otherwise I used natural sky lighting, reflectors, and accurate white balance. I used an antique Nikkor 55mm f/3.5 Micro lens for sharpness, with the camera being located about six feet above the pizzas.

Wednesday, November 10, 2010

Photoshop Wishlist #1

I AM CURRENTLY evaluating Adobe Photoshop CS5 on my computer, and have 18 days left until the trial copy expires. For the most part, I am delighted by the product, and see many improvements over my old CS3 version. It does not require that much additional computer power — and sometimes it uses even less, since it uses the graphics processor and memory to do tasks once reserved for the main processor.

Photoshop is a venerable, highly developed and nuanced product; like any complex, actively developed system that has been around for a long time, it has many features which see little use nowadays, as well as the refinement to be able to do important things very, very well.

However, a highly developed system may find it difficult to adapt to new conditions, having been optimized for previous conditions. Photoshop has its roots as a raster image processor primarily for graphics arts professionals, and is well-known as a good platform for doing digital art, with its excellent support of many paintbrush-like tools for creating images from scratch. But it is also used in photography, as its name suggests. I am beginning to see some limitations of its photographic capabilities, and one major limit is that images are always strictly bound to an output medium.

For most Photoshop users, this limit means that you edit your images in the sRGB color space, with eight bits per color channel. That isn't too bad, and this is an obvious approach for 90% of all users: after all, that is the standard format used by most cameras and Internet web browsers. Certainly you would want to edit a file in the format which the camera delivers and which your computer can display. Photoshop does things the way they ought to be done — right?

I see some problems with this. Each color channel has a maximum value of 255, a minimum value of 0, and we can use only integer steps between: 1, 2, 3, and so forth, with no intermediate values. This lack of precision is of little consequence to most users, and if you do need greater precision — for example, if you are applying severe curves to your image — then certainly you can use 16 bit mode (as I do) to increase the number of possible values. This extra precision helps avoid digital processing artifacts such as banding, and also lets you get better shadow detail.
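A quick sketch of why that precision matters: apply a strong darkening curve and then its inverse in 8 bit integers, and rounding permanently merges shadow levels that survive intact in floating point. (This illustrates quantization generally, not Photoshop's actual internal arithmetic.)

```python
import numpy as np

shadows = np.arange(16)   # the sixteen darkest 8 bit levels

# A severe "curve": darken by squaring, then attempt to undo it, with
# each intermediate result rounded back to 8 bit integers.
dark = np.round((shadows / 255.0) ** 2 * 255.0)
back = np.round(np.sqrt(dark / 255.0) * 255.0).astype(int)
print(back)   # [0 0 0 0 0 0 0 0 0 0 0 0 16 16 16 16] -- levels destroyed

# The same round trip without intermediate rounding is lossless:
print(np.sqrt((shadows / 255.0) ** 2) * 255.0)   # [0. 1. 2. ... 15.]
```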

CS5 has a great improvement over CS3 in that it allows far more operations on 32 bit images, giving us great precision in image manipulation; I haven't tried it yet, but look forward to experimenting with it.

But that isn't good enough. I'd like to see fractional RGB numbers. I want RGB values greater than 255.  I want negative RGB numbers. But this is madness! You cannot display an image with RGB values greater than 255! And what on earth are negative RGB values? Those are clearly impossible, there is no such thing as negative light!

But remember that I stated that in Photoshop images are always bound to a specific output medium, which for most photographer users is probably 8 bit sRGB. Clearly I do eventually want an 8 bit sRGB image, but while I work on processing an image, there may be times when my intermediate files will be out of that gamut. And I do process my images mainly in the wide ProPhoto gamut — or in the ultra-wide L*a*b colorspace — with 16 bits per channel to overcome the limits of sRGB, at least temporarily.

Do not think of processing images as a step-by-step process, where each increment produces a superior image.  Sometimes you have to make an image look worse before you can make it look better. I propose making images so bad that they are impossible to print, or even view accurately on your computer monitor — at least temporarily.

For example, when I apply a severe curve to an image, anything that ought to go over 255 is set to 255, and so we lose information and image detail. However, if its value ought to be 300, I want it to be 300, even though it is out of the gamut for the time being.  If I tell Photoshop to make an image twice as bright, I want the entire image to be twice as bright, without worrying about losing highlight detail. I will deal with the gamut when I need to deal with it, which is when I'm preparing the final image for print or web display.

I often add together multiple images to make a final image. What I have to do is apply an opacity to each layer (which is like doing division) to get my final result. But certainly there must be severe rounding errors, and we lose tremendous amounts of detail in the shadows as a result, which is a bad thing, especially since digital photography is known for often having terrible shadows. What I would like to do is add together images with impunity. Image addition, which is called Linear Dodge (Add) in Photoshop, clips at a maximum value of 255, but if the final value of all this addition ought to be 500, that is what I would like to see.

Generally speaking, I would like to see in Photoshop a pure kind of image algebra, where we can do all sorts of operations on images in a way that follows the standard rules of arithmetic, such as add, subtract, multiply, and divide, as well as other more obscure operations such as exponentials. To do this accurately, we can't have the hard cutoffs of 0 and 255, nor should we be limited to mere integers.
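A sketch of the kind of image algebra I mean, assuming a hypothetical unbounded floating-point representation where the gamut is enforced only once, at output time:

```python
import numpy as np

# Hypothetical pixel values, kept as unbounded floats while editing.
pixels = np.array([40.0, 180.0, 240.0])

pixels = pixels * 2.0    # "twice as bright": 240 becomes 480, not 255
pixels = pixels + 50.0   # add another exposure with impunity
pixels = pixels / 2.2    # apply an opacity without integer rounding
print(pixels)            # [ 59.09 186.36 240.91] -- nothing was clipped

# Only when preparing the final output do we impose the 8 bit gamut:
print(np.clip(np.round(pixels), 0, 255).astype(np.uint8))   # [ 59 186 241]
```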

This brings us to negative RGB numbers. These in fact can represent real colors. For example, if you work in a narrow-gamut color space similar to sRGB, and you want to represent a real color outside of its gamut, you can do so mathematically if you are willing to allow at least one RGB number which is negative or greater than 255. So a negative RGB value does not mean negative light; it merely indicates an out-of-gamut condition. If we are allowed to use negative numbers — and numbers greater than 255 — then we will be able to represent all colors while still using a system that is otherwise identical to our narrow-gamut color system. This system will remain relative to a particular gamut, while not being limited to that gamut.

This has many benefits to a careful Photoshop user. If you work in the ProPhoto or Adobe RGB color spaces, and I know many people do, how then do you know that a particular color is out of sRGB gamut? Certainly you can turn on the Gamut Warning feature (I use it all the time), but how can you create a mask for this sort of thing? Can you tell, just by looking at an RGB value, that it is out of gamut? By using large and negative numbers, we can then precisely identify what is out of gamut simply by the numbers: is it greater than 255 or less than 0?
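Once out-of-gamut colors are simply numbers below 0 or above 255, building that mask becomes a one-line test. A sketch over hypothetical floating-point pixel data:

```python
import numpy as np

# A hypothetical 2x2 image, three channels, floats relative to sRGB.
img = np.array([[[ 30.0, 120.0, 200.0],     # in gamut
                 [280.0, 140.0,  90.0]],    # red channel too bright
                [[ 50.0,  -8.0,  60.0],     # negative green: out of gamut
                 [255.0, 255.0, 255.0]]])   # exactly on the boundary

out_of_gamut = np.any((img < 0.0) | (img > 255.0), axis=-1)
print(out_of_gamut)
# [[False  True]
#  [ True False]]
```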

I often attempt to brighten shadows, and try to add lots of local contrast so that dark areas of an image still appear to be dark to the eye, yet in fact are not all that dark, and instead show lots of detail. This is often impossible to do well due to the arithmetical rounding errors found in low RGB values, which is a contributing factor to noise.  Ideally, a numerical representation of RGB would give equal precision to all levels of perceived brightness, but that is not what we currently have, as can be seen in the illustration below:

number of colors by brightness

Most of our current systems of numerically representing color are biased towards midtones, particularly saturated green and magenta tones, while offering a paucity of dark and bright colors. This gives us the risk of banding in our final image. Fractional RGB numbers would alleviate this problem greatly: though we can already use 16 bit images, fractional values would give us a better guarantee of processing shadow values — and highly saturated dark colors — without rounding errors. I've noticed that 8 bit sRGB in particular handles navy blue rather poorly, which is a pity, for that is my favorite color. We always risk banding when we have large areas of dark blue, as is often found in brilliant deep blue winter skies, especially when using a polarizing filter. We see the same problem with bright yellow colors.

If you take an 8 bit image and convert it to 16 bit, Photoshop multiplies the RGB values so that they fill the new numerical representation. So a value of 255 will be converted to 32768, the maximum value in Photoshop's 16 bit mode (which actually uses 15 bits plus one level, rather than the full 16 bit range of 65535). In the 32 bit system, which uses floating point numbers, 255 is converted to 1.0, which represents white in that system: all smaller RGB values are some fraction less than 1.

Instead, I propose an alternative method. When you convert an 8 bit image to this new system, all values remain unchanged. The difference is that your values can, after processing, be greater than 255, less than 0, or some fractional number. With this system, you can be very careful, and never allow your image to go out of gamut, or you can edit to your heart's content and worry about gamut later. If you edit an sRGB image in the sRGB color space, your image may want to go out of gamut and you will never know it, except that detail will disappear.

There are a few problems with my system.  First, you can't see the extra colors if you don't have a wide gamut monitor, but we already see this problem when working in the Adobe RGB or L*a*b color space. The other problem comes when we want to convert a high-precision image back down to 8 bits.

The key to working with images in any gamut is to do by-the-numbers processing, and have a thorough understanding of the channel structure of the images. Instead of merely determining if an image looks OK on your screen, you instead measure an image to be sure the colors are right. Calibrating your images is more important than calibrating your monitor.

Converting an image back down to an output format like 8 bit sRGB is more problematic, but take a look at Photoshop's own conversion options from 32 bit images.

Admittedly, doing something like this may not work well within the Photoshop product, as it would require a major redesign of many features. Still, I do think that it would be quite useful for accurate image processing.

Tuesday, October 26, 2010

Black and White

OCCASIONALLY YOU SEE, on the dpreview.com forums, a posting questioning the use of black and white in contemporary photography. The critic — almost always, apparently, an educated, brash young man — will declare black and white photography obsolete, for it was merely a product of historical forces, ignorance, and technological compromises, and so it has no relevance to us today; he states that black and white photography is something that ought to be abandoned and forgotten.

This is of course the error of historicism, which in its extreme view denies any universal laws or truths. An opposite error idealizes all situations according to a simplistic theory, and ignores the inherent messiness of life. Most of us bounce back and forth between these two extremes; let us instead find the virtuous middle and attempt to discover what black and white photography is about.

I can see various reasons for either shooting black and white film or doing digital black and white conversions.
  • Just because you like it.
  • Cost, convenience, or necessity.
  • Nostalgia. 
  • Technical advantages. 
  • For aesthetics or mood. 
So why should we still produce black and white photography? Let's consider these individually.

Just because you like it

OK, why do you like black and white photography? Contemplate the reasons why you find it appealing. Perhaps it is some combination of the following?

Cost, convenience, or necessity

Suppose your photograph will be printed in a newspaper, bulletin, flyer, or other inexpensive black and white medium. You may prefer your photo being printed in full color, but since that is not going to happen, you do the best you can despite this limitation.

Saint Louis University, in Saint Louis, Missouri, USA - Museum of Contemporary Religious Art at dusk (black and white)
Museum of Contemporary Religious Art, at Saint Louis University. I needed to convert my image, originally in color, to black and white for inclusion in the book Saint Louis University: A Concise History

If you shoot film, and have your own darkroom or film scanner, black and white photography remains rather inexpensive, since you are able to develop and print your own quality photos. If you live in a remote area, this may actually be the most convenient solution as well, and superb quality film cameras are available at low cost. While you can process your own color film, this is rather more expensive and difficult compared to black and white.

Sometimes, the lighting conditions are so poor that a black and white conversion is the fastest and easiest way to produce a quality image. I will sometimes convert an image taken under sodium vapor lights to black and white, because the color of that lighting is usually unpleasant and detracts from the beauty of the image. I very often do a conversion when I use extremely high ISO or severe curves on an image, both of which produce intense noise.

Nostalgia

Do you pine for a time when style of dress and manners were better? Times that were happier, even if more difficult? Do you feel a twinge of romance when viewing those older things? Then perhaps you like the nostalgic look of black and white photography.

[Portrait of Doris Day, Aquarium, New York, N.Y., ca. July 1946] (LOC)
Doris Day, singer and actress, ca. July 1946, New York City. Photograph by William P. Gottlieb.

I must admit to being a bit undecided whether this kind of nostalgia is desirable or not. On one hand, it is pleasant, and escaping from the drudgery of the present by an imaginative look at the past is sometimes necessary. On the other hand, ought we not prepare for the future, where we are inevitably headed? Or rather, ought we live our life in the present, the only time we can truly see?

We must not fall into the trap of believing the doctrine of inevitable progress, the idea that things are always getting better and better. And likewise, we must not distrust those who prefer older things; they may not be reactionaries, but rather they might be correct. The theory of evolution may seem to imply perpetual improvement, but in reality, for every advancement there are multitudes of fatal mistakes. So what we call nostalgia may very well be a rational attraction to things that were in fact better in some way.

Technical advantages

We must be humble enough to realize that older technologies might actually be better in many ways. A large format camera, with quality black and white film, expertly exposed and processed, will have a range of tones and detail that far exceeds any DSLR snapshot. I do use digital photography exclusively, since it is so convenient, but there are trade-offs.

One of the great advantages of black and white photography is the wide contrast range possible. Often in color, it is difficult to get a full range of tones from pure black to white, since your brightest significant detail may be a saturated color — you can't brighten it without losing saturation. This is particularly troublesome if your brightest color is a pure blue: you just don't have that much room for other tones unless you do severe edits to the image. Blue skies are often a problem: you can't brighten the foreground without risking overexposure of the sky (which will damage the sky color), which is one reason why polarizing filters are so useful.

Holy Family Log Church, in Cahokia, Illinois, USA - exterior at dusk 9 (black and white)
I inadvertently overexposed the sky on this image, turning it into an implausible shade of cyan: but it looks fine when de-colorized. This is Holy Family log church, in Cahokia, Illinois.

With black and white images, you only have to worry about over- or under-exposing one tone: white or black. With color images, you need to worry about three color channels, any one of which may be poorly exposed, harming the final image. With color, we have a far smaller dynamic range, which is why color images benefit from fairly flat lighting.  On the contrary, the masters of black and white photography use the increased dynamic range to excellent effect.

View of Gateway Arch from Laclede's Landing - original color
I took this photo for a book on the Gateway Arch. This is a merge of numerous exposures, and the camera was set to automatic white balance. This is a terrible image in several ways, and the yellow sodium vapor lighting is particularly objectionable.

High efficiency electric lighting often has poor color; fluorescent lights are quite bad, due to the unattractive and broad range of green-to-magenta tones they produce. Sodium vapor lighting, with its narrow yellow-orange color, leads to extremely poor color photos. In these cases, a black and white image may be superior.

View of Gateway Arch from Laclede's Landing - black and white
The same series of images, but I converted them to black and white before blending. I did some additional processing on this image, such as applying curves and sharpening. In my opinion, this isn't an image I'd particularly want to see in print, but I do think it is an improvement.

Digital cameras have linear sensors that respond to light such that twice the brightness registers as twice the signal. Unfortunately, this means that most of the sensor's data is clustered around the very brightest of objects, and there is always a great risk of losing detail through overexposure. This also means that most of the tonal scale will be represented with very little data, which leads to lack of detail and noise in the shadows. So the general advice for digital is that you expose for the highlights and post-process to improve the shadows. Black and white film technology, on the contrary, is known for having great shadow detail — you expose for the shadows and post-process to improve the highlights, and unlike digital, it doesn't have a hard cutoff at the ends of the tonal range. Black and white film is traditionally very good for photography in dim, highly contrasty lighting, and is used to good effect in film noir.
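The arithmetic behind this is straightforward: in a linear encoding each stop down halves the signal, so each darker stop is recorded with half as many distinct levels. A sketch for a hypothetical 12 bit raw file:

```python
# A hypothetical 12 bit linear sensor records 4096 levels.
levels = 4096
for stop in range(1, 7):
    top = levels >> (stop - 1)    # level at the top of this stop
    bottom = levels >> stop       # level at the bottom of this stop
    print(f"stop {stop} below clipping: {top - bottom} levels")
# stop 1: 2048 levels ... stop 6: only 64 levels.
# Fully half of the recorded values describe the single brightest stop,
# which is why the shadows are starved for data and show noise.
```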

Grant's Trail and Gravois Creek Conservation Area, in Saint Louis County, Missouri, USA - unprocessed forest scene
A forest scene, at Gravois Trail, in Saint Louis County, Missouri. This hand-held photo is underexposed, and was shot at ISO 3200. There is hardly any visible detail, and brightening the image would reveal extreme color noise in the shadows.

Grant's Trail and Gravois Creek Conservation Area, in Saint Louis County, Missouri, USA - forest scene converted to black and white
The same image, converted to black and white — I discarded most of the red and blue channels. I brightened the image greatly, and applied some noise reduction and sharpening.

Digital noise is most evident in the shadows, and color digital noise is usually ugly and highly undesirable. On the contrary, black and white noise is far less objectionable, and can even improve an image, giving an impression of texture and sharpness. This is often an advantage when shooting at very high ISO, or when brightening a severely underexposed image: a terrible color image can often be dramatically improved by converting it to black and white.

For aesthetics or mood

While nostalgia seeks the better things from the past, and black and white photography may evoke that nostalgia, we must always remember that reality in all ages past was high-resolution, wide-gamut, high-dynamic range color. Would a master photographer of a bygone era have used color photography if it were technologically feasible? Was his mastery of the black and white medium merely making the best of an unsatisfactory situation? Undoubtedly for many, although this is speculation. We do in fact know that color technology was eventually widely adopted, and also that black and white never went away.

Color is an important factor in beauty. Bright colors are pleasing, and there are many studies and theories of the psychology of color which assign good, desirable effects to the various colors. However, the color black is nothingness, the color white can be blinding, and gray is dreary: black and white photography is necessarily less cheerful and pleasant than color. Since black and white is more abstract than color, it can also suggest mystery.

Statue in cemetery - heavily processed
A statue in a cemetery - heavily processed to imply a bleak mood.

So contemporary photographers can use the dreary aesthetics of black and white to evoke a mood of bleakness, despair, and ugliness. This can invoke a kind of anti-nostalgia, seeing not the good in the past, but rather its ugliness, and so black and white photography can be used in a mocking, disparaging fashion. It can also be used with fantasy, where the dull everyday world is seen in black and white, while the fantasy world is in color.

Some conversion hints

When shooting or converting an image to black and white, it is usually essential to adjust the image to give you the full range of tones, otherwise the image may look flat. Good global contrast is essential for a good black and white image. Adjusting curves of color images is a perilous activity: you can have color shifts, oversaturation, and you can send an image out of gamut; these are hardly concerns with black and white images.

Gothic Ornament 2, McMillan Hall, Washington University, in Saint Louis, Missouri, USA - black and white comparison
Gothic-style ornament at Washington University in Saint Louis. The straightforward conversion on the left lacks contrast, which is corrected on the right.

The second important consideration is the conversion of colors to gray tones; there are many ways to do this in Photoshop, and there are some excellent plug-ins that improve the process. Even though you lose color information in your conversion, the various gray tones ought to imply different shades in the final image.

Equal-Lightness colors converted to grayscale
The color image on the left was converted in Photoshop using Image->Mode->Grayscale. This is obviously a fabricated image, since I specifically chose all colors to have the same luminance. Photoshop has many ways to convert to black and white, and some may be better than others at implying changes in tone.

Generally speaking, you want to select the channel, or combination of channels, which shows good contrast between different objects. If the subject has stripes, you probably will want a conversion that shows the stripes well; some conversions may not show the stripes at all. Also, for faces, your conversion will have a drastic effect on showing or hiding blemishes and wrinkles. In Photoshop, the Black and White tool is very good for doing this conversion; however, a thorough knowledge of the channel structure of images and the laws of color mixing is very helpful here, as in the sketch below.
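Here is a sketch of the channel mixing that underlies tools like Black and White: a weighted sum of the RGB channels, where the weights decide which colors render light and which dark. The weights below are illustrative choices, not recommendations:

```python
import numpy as np

def mix_to_gray(img, weights):
    """Grayscale via a weighted channel mix. img is an H x W x 3 array
    of 8 bit values; weights that sum to about 1.0 preserve brightness."""
    gray = img.astype(float) @ np.asarray(weights, dtype=float)
    return np.clip(np.round(gray), 0, 255).astype(np.uint8)

red_berry = np.array([[[200, 40, 50]]])             # one red pixel
print(mix_to_gray(red_berry, [0.30, 0.59, 0.11]))   # standard luminance: 89
print(mix_to_gray(red_berry, [0.80, 0.10, 0.10]))   # favor red: berry turns light (169)
print(mix_to_gray(red_berry, [0.00, 0.50, 0.50]))   # ignore red: berry turns dark (45)
```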

Finally, you can add far more local contrast to a black and white image compared to color, while still making it look plausible. This final step has been used to great effect by the masters of the medium.

Monday, September 20, 2010

Over and Under Exposure

GETTING EXPOSURE RIGHT is one of the challenges — and annoyances — of photography. I had long experience with black and white film photography and so I thought I had a pretty good understanding of exposure and how to get a decently-exposed image. When I got into digital photography back in 2001, I was quite disappointed with the results — the automatic exposure was often very wrong among other problems — and I had the bad opinion that it was the camera's job to set itself properly. You can read more of this on my old posting A Camera Diary.

Besides thinking that good photography merely involves choosing the ‘best’ camera, I was quite naïve about the properties of color digital images, and how they differ from black and white film. Exposure is far more critical to color photography than to black and white.

Please consider the following series of images, taken at ISO 200, f/8, with exposure times varying from 1/8 second for the darkest to 8 seconds for the brightest. This Beaux-Arts building was built in 1900 for the Saint Louis Club, later became the headquarters for the Woolworth's company, and now houses the Saint Louis University Museum of Art.

Saint Louis University, in Saint Louis, Missouri, USA - Saint Louis University Museum of Art at dawn - side-by-side composite of 4 exposures

Which image is exposed the best? Certainly exposure is something of a matter of taste, and your particular monitor settings may make one look better than another, and you might change your opinion if you used a different computer or if you printed these. However, too much exposure will give you all white, and too little exposure will give you black, and then you no longer have an image of a building. Objectively speaking, you have to expose within a specific range, which will vary depending on subject matter, your camera, and your post-processing.

If I had to choose between these four images, I'd select either the upper right hand image, or the lower left hand one; although I think that an intermediate exposure between these two would have been better. I took this in the morning, and perhaps I ought to have waited a few minutes for the sky to get brighter, which would have given a better balance of light over the entire image.

OK, you might say that you'll choose whatever image looks best to you; of course you do have to do that. Just because a machine says that one image is better than another doesn't mean that we have to follow that advice, because photographs are intended for humans, not machines. Just because the camera says that the photograph is correctly exposed doesn't mean that it will look best to us. But limiting ourselves to just gut instinct can't be right: “The unexamined life is not worth living for a human being” wrote Plato in the Apology. Instead, we ought to ask some questions. Why does one image look better than another? How can we reliably and predictably make good images?

Just because an image appears to be a bit dark does not mean that it is bad — there is a lot of detail that can be pulled up from the shadows. Generally, overexposure is more of a problem with digital images than underexposure, and so the standard advice is to expose for the highlights and process for the shadows. By the way, this is opposite to the advice used for shooting film negatives, where you generally have to expose to get good shadow detail.

Let's pull up some shadow detail from the upper right hand image:

Saint Louis University, in Saint Louis, Missouri, USA - Saint Louis University Museum of Art at dawn - lightened shadows

I think this is an adequate image. Lots of detail is normally lost in the shadows, and it is easy to make this detail visible. This is just a rough brightening, and there are lots of techniques to show good detail in shadows. Were I doing a better job, I'd add more local contrast in the shadows; these shadows look a bit flat.

My intention when taking these photos was to produce a series of images which I would later blend together to make decent single image with lots of highlight and shadow detail with little noise and good color rendition. Before I submit the images to my exposure-blending software, I create hard exposure masks which cut off those parts of the images which are over- and under-exposed; the end result is a nicer looking image with low noise and good color tone. Without these masks, the software produces unpleasant color shifts in the highlights and excessive noise in the shadows. Masking also reduces haloing artifacts generated by my software.

Overexposure

It would be helpful if we define our terms. A pixel in an image is overexposed if any one of the three color channels is at its maximum value; or for eight bit images, if any channel is equal to 255. Now 255 might just happen to be the correct value of a pixel, but that is unlikely, since everything brighter will be equal to 255 also. If any one of the three color channels clips due to overexposure, then you will get color shifts in the final image.

This color shift is rather prominent on the brightest of my sample images.  Here is a close up view; note how the color of the building near the light goes from a nice orange color to yellow, and then white; while the blue sign goes to cyan then white:

Saint Louis University, in Saint Louis, Missouri, USA - Saint Louis University Museum of Art at dawn - overexposure masks

In the upper right hand corner, I put in black wherever any one of the three color channels of a pixel is equal to 255. In the lower left hand corner is a mask which shows wherever the RGB luminosity goes to 255; notice how it masks out a smaller area than the full overexposure mask.

RGB luminosity is roughly defined as:
30% Red + 59% Green + 11% Blue
This approximates the sensitivity our eyes have to each primary color.  But this value will often be less than 255, even if one of the channels is overexposed. Some camera histograms will show this value instead of three individual color histograms, which can be less than helpful. Also, some exposure blending and tone-mapping methods use this value as an estimate of brightness, and the final images often show these color shifts.

In the lower right hand corner I superimposed the full overexposure mask on the image. Note that it covers up nearly all of the areas that show an obvious color shift, but not all.  There appears to be some bad color bleeding out from around the edges of the mask.

This image, even though it comes from a Camera RAW file, still has been highly processed by the RAW converter, plus I did some lens distortion correction as well as straightening of the image. My camera uses a matrix of light sensors, and each one is sensitive to only one color.  When the RAW converter makes the final image, it estimates the missing colors at each pixel by examining neighboring pixels. Likewise, when correcting for lens distortion and camera tilt, Photoshop estimates the correct pixel values by also examining neighboring values.  So we are always doing some averaging; but consider this example equation:
Estimated value = (250+240+235+garbage)/4 = garbage
So the effects of overexposure anywhere in an image will spread a bit to neighboring pixels. In practice, when I make a mask like this, I will mask out everything that has a value over 250 or so, which seems to get rid of most if not all of these nominally good, but actually bad pixels, without losing too much highlight detail. Someday I'd like to see software that offers a mask channel associated with images, which will show all pixels which are overexposed, or which are indirectly unreliable due to processing.
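Here is a sketch of such a mask, using the conservative 250 cutoff described above; following the Photoshop convention, black (0) marks the unreliable pixels:

```python
import numpy as np

def overexposure_mask(img, cutoff=250):
    """Mask pixels where ANY channel is at or above the cutoff.
    img is an H x W x 3 array of 8 bit values. A cutoff below 255 also
    catches pixels contaminated by the demosaicing and resampling of
    clipped neighbors. Returns 0 where bad, 255 where good."""
    bad = np.any(img >= cutoff, axis=-1)
    return np.where(bad, 0, 255).astype(np.uint8)

img = np.array([[[252, 180, 90], [200, 140, 80]]])   # first pixel clips in red
print(overexposure_mask(img))   # [[  0 255]]
```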

Here is the brightest image, with a black overexposure mask superimposed on it:

Saint Louis University, in Saint Louis, Missouri, USA - Saint Louis University Museum of Art at dawn - extended overexposure mask

I extended the mask a little bit, so as to eliminate the color shift which extends a few pixels beyond the measurably overexposed parts. Note that the sky is masked out, because it is completely overexposed in the blue channel. About 1/3rd of the image is overexposed. There is good detail throughout the rest of the image. Note that there is a slight blue halo around the roofline; this is because this image is not particularly sharp, and so there is a bit of blur along edges which does not get masked out.

Photography has many trade-offs, requiring us to make choices; we neither want to overexpose, nor do we want to underexpose. Ultimately, some detail doesn't matter, and specular highlights and light sources are usually considered unimportant — it is OK to overexpose them most of the time, as we see even in our darkest example photo above. The lights aren't the obvious subject of the photo.

Now if you have large areas of color in your photo, you probably don't want to overexpose them, even if they aren't the subject. Digital cameras will often overexpose blue skies, which I think is objectionable most of the time, even if it is not the subject of the photograph.  This kind of overexposure is particularly objectionable when the sky goes from blue to cyan to white in a single image: that just doesn't look natural. See my article Three Opportunities for Overexposure. Alternatively, it is often best to strongly overexpose a background, turning it a pure color or white, rather than having a muddled partial overexposure with obvious color shifts.

Another problem with overexposure, besides color shift, is that it removes texture from the image. Areas with even one channel overexposed will appear somewhat flat. Now there are techniques which you can use to rescue such overexposed images, by generating plausible detail from the remaining channels. This is difficult to do correctly, and is time-consuming.

Now technique ought to serve the subject matter; the subject does not serve the technique except perhaps when you are creating images for teaching. Just because the blue channel of the sky in an image is overexposed does not mean that you can't end up with a terrific photograph, if the subject is worthy.

Underexposure

Defining overexposure is easy, even if we have to be careful and realize that it isn't quite as simple as we would like. Defining underexposure is far more problematic.

I defined overexposure on any given pixel as the situation where any one channel is at its maximum value, generally equal to 255 with 8 bit images. OK, we can naïvely assume that underexposure is the situation where any color channel equals 0. For example:

Saint Louis University, in Saint Louis, Missouri, USA - Saint Louis University Museum of Art at dawn - naïve underexposure mask

Not much of a mask at the bottom. This is next to worthless: just a few black dots here and there, even though there is a lot of black in this image. There are several problems with our effort here, the most significant being that there is a tremendous amount of noise, relatively speaking, in the darkest part of the image, often due to the quantum fluctuation of light. Light is detected in discrete quantities due to a mysterious property of matter and energy on a small scale, and therefore is quite non-uniform. There are also several sources of noise in the camera itself, and these sources will add to the signal, moving it away from zero. Also consider the indirect problem mentioned with overexposure: image manipulation will ‘infect’ neighboring pixels, and since no pixel value can be less than zero, this averaging will only increase the value found at pixels which ought to be zero. Noise at low levels does not average out to zero, but instead will brighten dark pixels.

Some cameras, as well as RAW converters, will do plenty of image manipulation including noise reduction or black-point cutoff, making our low-value pixels even more unreliable.

Instead I use a working definition of underexposure which masks out those values near zero. Now, should I take into account all three color channels at one time, or each color channel separately? If I choose all three, then I might not mask out a particular poor, noisy channel if the other two are good.

But if I mask out each channel separately, then I might get the situation where a particular pixel is both overexposed and underexposed! I often see this with stained glass. For a particularly brilliant red piece of glass, I may have the Red channel at 255, while the Blue channel is at 0: this indicates that the color is particularly pure and outside of the color gamut of the camera or color space used in Photoshop.

There are several methods I use to mask out dark noise.  The simplest uses the image itself and the Threshold slider; this uses the RGB luminance function shown above. I examine the image while moving Threshold, and stop when a reasonable amount of noise is eliminated.  Using this process on our darkest sample image we see:

Saint Louis University, in Saint Louis, Missouri, USA - Saint Louis University Museum of Art at dawn - underexposure mask detail

The road and the top of the building on the left show the most noise. I adjusted the Threshold slider until much of that noise was eliminated, as you can see on this detail from the lower right hand corner of the image. You don't want to do too much of this.

I also use the same process, but doing each channel independently. This makes an exceptionally clean final image, but only if we don't have the simultaneous over and under exposure problem mentioned above.

Saint Louis University, in Saint Louis, Missouri, USA - Saint Louis University Museum of Art at dawn - three channel underexposure mask

This looks like a pretty good mask. It masks out the most underexposed parts of the image while not showing too much residual noise.
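A sketch of this per-channel approach, with masking done independently near zero so that one noisy channel cannot hide behind two good ones; the threshold is a judgment call, just like the slider:

```python
import numpy as np

def underexposure_masks(img, threshold=8):
    """Per-channel underexposure masks for an H x W x 3 8 bit image.
    Values at or below the threshold are treated as dark noise rather
    than signal. Returns one mask per channel: 0 is bad, 255 is good.
    Note that a pixel can fail here and in the overexposure mask at
    once -- e.g. brilliant stained glass with red at 255 and blue at 0."""
    return np.where(img <= threshold, 0, 255).astype(np.uint8)

img = np.array([[[255, 60, 2], [40, 50, 60]]])   # red glass, then dim gray
print(underexposure_masks(img))   # only the blue of the first pixel is masked
```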

There is another technique which illustrates the problem of dark noise quite dramatically. What I do is take each channel separately, and brighten it with Curves until it no longer appears to be a photograph, but rather a line and charcoal drawing. Instead of a nice apparently continuous series of tones, we get discrete steps. This shows that we don't have enough spacing between brightness levels to produce a good image. Usually this effect is most prominent in the Blue channel because normal in-camera processing greatly amplifies its shadows.

Saint Louis University, in Saint Louis, Missouri, USA - Saint Louis University Museum of Art at dawn - line-drawing effect

You may want to click on the image to see the full size version. Some image enhancement software will work quite hard to bring out detail in shadows, but this is certainly detail we don't want to emphasize — unless of course we are going for a cartoony look. As it happens in this image, the red and green channels don't often go all the way to zero, and so we don't get much of the line drawing effect; but the blue channel does, so our mask eliminates considerable amounts of noise.

Creating masks like these can show you how much of your image consists of high quality pixels. I use these also for creating exposure blends. Here are those four images blended together, masking out the big color shifts and dark noise:

Saint Louis University, in Saint Louis, Missouri, USA - Saint Louis University Museum of Art at dawn

It looks pretty good, and there are hardly any color shifts except for the areas which were overexposed on the darkest base image. Notably, the color on the building and the blue sign have uniform color as needed, and we have excellent detail in the shadows. The major artifact here is the sidewalk light to the right of the stairs:  it turned off between the second and third photos and so we get a strange rendering of it here. There is also some roughness along the roofline.  You can click on the image to see the full resolution version.

Conclusion

The phenomenon of color shift — when even one channel is overexposed — severely limits quality color photography. Of course, solid studio lighting, or supplemental lighting with fill-in reflectors, is frequently used by quality photographers. Or you can blend multiple exposures, but then the problem is finding the right algorithm or software to do it.

On the contrary, this color problem implies that quality black and white photographs ought to be easier to produce. We can introduce severe changes of contrast without worrying about color shifts.

Sunday, September 5, 2010

Imaginary and Impossible Colors

STARE AT THE TOP square for a minute or more. Do not move your head, and keep your eyes right in the middle of the square.

Slowly move your eyes to the square below.

Imaginary colors

Glorious, isn't it?

You are seeing some colors that are impossible to actually portray in the real world, other than transiently as you see here. These are called “imaginary colors”. You can't make paint that shows those colors, nor can you project a color of that light on a screen, nor show it on a computer monitor. A color meter does not measure these colors.

Here is the problem. The human eye has basically three classes of color sensors, or cone cells: one generally sensitive to the red side of the color spectrum, another sensitive to blue on the other side of the spectrum, and green in the middle, along with green-blue sensitive rod cells that work most prominently in dim lighting. There are three color sensors, three only (although there may be some people — probably females only — who have four classes, and many people, mostly males, who have fewer than three).

There are some deep, rich red colors which do not stimulate either your green or blue cone cells.  There are some deep, rich, dark violet colors which do not stimulate green or red.  However, there are no green colors whatsoever which do not also stimulate your red or blue cells, or even both.

A camera mimics human vision by also having three classes of sensors, and as with the eye, there is no color which will give a signal in the camera's green channel without also giving a signal in either red or blue or both. There will of course be reds without blue, and blues without red. You can examine your own RAW photos with the excellent RAW Photo Processor, set so as to do minimal processing of the image.

Unprocessed RGB

RAW Photo Processor was set with UniWB and no color space assigned, which gives basically the actual signal received by the pixels. No green color shows both a dark red and a dark blue signal at the same time. That we mathematically represent a pure bright green color in the sRGB color system as Red=0, Green=255, and Blue=0 tells us very little as to how the eye or camera senses the color: you'll never get a green signal without significant amounts of either red or blue or both.

Human color vision as it is has the potential to see these supergreen colors, unadulterated with excess red or blue. Our experiment above shows that you can actually see these colors, if only for a brief moment. Individuals with synesthesia or severe migraine headaches can see them more often.

Apparently, when we stare at a color long enough, our eyes become ‘fatigued’ and lose sensitivity to that color. Staring at the red-blue colors leads to decreased sensitivity to them — and so we can see, ever so briefly, imaginary supergreen. I suspect that this same mechanism is responsible for the automatic white balance of the eye: we can see gray tones correctly under a wide variety of lighting, while a camera set to a fixed white balance would not, and so the eye must have some mechanism of subtracting out the color of the light.

Since we generally have only three types of color sensors in our eyes, which have well-characterized properties of light absorption, we have the basis for creating a precise mathematical model of color: and this model will have precisely three coordinates. This is despite the intense processing that goes on in our eyes and brains; processing that is hardly known at all, despite the fact that we experience it all of the time. That it is often difficult to put our experiences into words does not mean that we ought not attempt that work.

Following is a chart which represents the full gamut of real saturated colors seen by human vision:

Cie_Chart_with_sRGB_gamut_by_spigget

Image originally from Wikipedia. Source and attribution is here.

This image approximately illustrates the full range of midtone saturated colors that can be actually reproduced by paint or by colored lights. Notice the straight line between blue and red? That shows that we can in fact get pure red and blue tones, not adulterated by any green at all, along with purple and scarlet mixtures of the two colors. Notice that the hump in the curve is in the green region, which shows that there is no physical green color which is not also a bit red or blue. If a supergreen color actually existed, then this chart would be a perfect triangle.  Full human color imagination, including the supergreen colors, very likely is a triangle — for we can predict quite accurately what kind of supercolor we will see in experiments like the one above.
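We can check this numerically. Using the standard CIE relations X = xY/y and Z = (1 − x − y)Y/y, together with the published XYZ-to-linear-sRGB matrix, a chromaticity up in the green hump of the horseshoe yields negative sRGB coordinates; the particular chromaticity and luminance below are merely illustrative:

```python
import numpy as np

# XYZ (D65 white point) to linear sRGB, per IEC 61966-2-1.
M = np.array([[ 3.2406, -1.5372, -0.4986],
              [-0.9689,  1.8758,  0.0415],
              [ 0.0557, -0.2040,  1.0570]])

def xyY_to_linear_srgb(x, y, Y):
    X = x * Y / y
    Z = (1.0 - x - y) * Y / y
    return M @ np.array([X, Y, Z])

# A saturated real green from near the top of the horseshoe:
print(xyY_to_linear_srgb(0.20, 0.70, 0.5))
# approx [-0.34  0.80 -0.02]: the negative red (and blue) coordinates
# mean no mixture of sRGB primaries can reproduce this color.
```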

The color gamut shown above is only approximate in color, because the image itself is limited to the gamut of the sRGB color system, which is itself represented by the small triangle inside of the big horseshoe. The corners of the triangle represent the primary colors used by sRGB.
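
For reference, the horseshoe is the CIE 1931 chromaticity diagram. Each point plots only the proportions among the three tristimulus values, with brightness divided out, which is why the chart shows midtone saturated colors:

$$x = \frac{X}{X+Y+Z}, \qquad y = \frac{Y}{X+Y+Z}.$$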

sRGB is quite standard, and is used by most cameras, computer monitors, web browsers, and even High Definition Television, but it can only show about 35% of all possible physical colors, and tends to be lacking in purple, green, and cyan. By using excellent quality color filters, and a bright enough light source, you can display a much wider gamut of colors — the triangle will be bigger and fill up more of the horseshoe — and for a price you can buy a high-gamut monitor that can display more colors than the puny sRGB standard.

This particular standard was chosen by Microsoft and Hewlett-Packard because it works with even cheap computer monitors, and because it uses only 8 bits of data for each of the red, green, and blue color channels, which was a serious limitation back in the days of expensive computer memory. This standard gives us a large enough gamut of colors, with a small enough spacing between them, to avoid banding artifacts. However, I always use 16 bits when working on my pictures, even though I eventually have to reduce them to 8 bits when I show them on the web. (Computers, by the way, are particularly efficient at manipulating data in powers of two, so we often see 4 bits, 8 bits, 16 bits, 32 bits and so forth, always doubling.)
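
The arithmetic behind the banding worry is simple: 8 bits per channel give 2^8 = 256 levels, while 16 bits give 2^16 = 65,536. A small Python sketch of the final 16-to-8-bit reduction, where the gradient is a stand-in for a smooth sky:

```python
import numpy as np

# A smooth 16-bit gradient, standing in for a clear blue sky.
gradient_16 = np.linspace(0, 2**16 - 1, 4096).astype(np.uint16)

# Reducing to 8 bits keeps only the top byte: many distinct 16-bit
# values collapse into a single 8-bit value, the source of banding.
gradient_8 = (gradient_16 >> 8).astype(np.uint8)

print(np.unique(gradient_16).size, 'distinct levels in 16 bits')
print(np.unique(gradient_8).size, 'distinct levels in 8 bits')
```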

To display a color on a computer or by projection, you need at least three primary colors, and the particular colors you choose, and their brightness, determine the final gamut. But notice that you have to use actual, real colors for your projector, colors within the horseshoe, and so there will always be colors that cannot be represented. If you want a wider gamut, you will eventually have to add more primaries, which is precisely what we see with high-quality color printing. This is impractical with monitors, however, which are usually limited to just three primary colors.
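
Choosing three primaries amounts to fixing a linear map from the device's (linear-light) RGB values to the device-independent CIE XYZ coordinates. For the sRGB primaries and the D65 white point, the standard matrix is:

$$\begin{pmatrix} X \\ Y \\ Z \end{pmatrix} = \begin{pmatrix} 0.4124 & 0.3576 & 0.1805 \\ 0.2126 & 0.7152 & 0.0722 \\ 0.0193 & 0.1192 & 0.9505 \end{pmatrix} \begin{pmatrix} R \\ G \\ B \end{pmatrix}.$$

Since R, G, and B can only be nonnegative, every displayable color is a mixture of the three columns, which is exactly why the gamut is the triangle spanned by the primaries.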

In a sense, the three primary colors are a bit arbitrary, and artists have used a variety of primary color systems in their theory. However, some primary color systems are better than others, because they have a larger color gamut, or can represent a larger variety of basic colors. Undoubtedly the bottom of the horseshoe is rather pristine, so I would expect that most any color system ought to attempt to get as close to the bottom corners as possible; the open question is which third color to use. Do you want good greens or good cyans? You can't have both if you use just three colors.

Note that painters use subtractive colors, so their primary colors out of necessity will be the opposite of what is shown here: cyan, magenta, and yellow rather than red, green, and blue. In particular, the painters' primary palette will be especially deficient in good blues, greens, and reds. This is why some pigments are highly prized by artists: they supply colors that are otherwise unmixable. In the ancient Mediterranean, the most costly dyes came from various species of Murex snails, and the colors produced lie down at the bottom of the horseshoe chart, with the purple and scarlet colors used for Imperial dress, and the blue used for the fringes of Jewish prayer shawls. These colors are decidedly non-mixable: you have to obtain a pure colorant, for you cannot get these colors by mixing others.

If you want to represent the entire gamut of colors mathematically, using only three numbers, then you have to go outside the bounds of the horseshoe. Some combinations of those numbers will then give colors that cannot be reproduced by any paint or filter, though remember that a supergreen color can actually be experienced under some circumstances. However, there are some wide-gamut color systems which represent colors that cannot exist even in our imaginations, like a scarlet black or a deep blue white. As far as I know, a true wide-gamut color system that includes supergreen as one of its primaries does not exist, but it would be useful if it did, since it would closely represent the entire human visual system, including our imagination, while excluding those colors which are impossible even to imagine.
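
To make 'outside the bounds' concrete, here is a small Python sketch, with helper names of my own invention, that tests whether a given chromaticity can be mixed from the sRGB primaries at all, using their published (x, y) coordinates:

```python
# Chromaticity (x, y) coordinates of the sRGB red, green, blue primaries.
SRGB_PRIMARIES = [(0.64, 0.33), (0.30, 0.60), (0.15, 0.06)]

def side(p, a, b):
    """Signed-area test: which side of the line a-b does point p fall on?"""
    return (p[0] - b[0]) * (a[1] - b[1]) - (a[0] - b[0]) * (p[1] - b[1])

def in_gamut(p, tri=SRGB_PRIMARIES):
    """True if chromaticity p lies inside the triangle of the primaries."""
    r, g, b = tri
    d1, d2, d3 = side(p, r, g), side(p, g, b), side(p, b, r)
    has_neg = d1 < 0 or d2 < 0 or d3 < 0
    has_pos = d1 > 0 or d2 > 0 or d3 > 0
    return not (has_neg and has_pos)

print(in_gamut((0.31, 0.33)))  # True: near the white point
print(in_gamut((0.16, 0.73)))  # False: a saturated spectral green
```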

Here are the most commonly used color systems:
  • sRGB: about 35% of the entire visible gamut
  • ColorMatch RGB: a bit larger than sRGB, with slightly different primaries
  • Adobe RGB: 50.6%
  • Wide-Gamut RGB: 77.6%
  • CMYK: smaller than sRGB, and not completely overlapping it
  • ProPhoto RGB: most of the visible gamut, though 13% of its colors are imaginary or impossible
  • L*a*b* colorspace: 100% of visible colors, along with lots of impossible colors
If you are a digital photographer, you have to choose a color system, and the question becomes which one to use. Now sRGB is used everywhere, and is often the only color system that will look good on output: most devices simply can't do much better than sRGB, and many output devices assume that sRGB is being used. If you output a wide-gamut file on an sRGB device, the device will read the file's numbers as if they were sRGB values, and the colors will look muted, giving you precisely the opposite of the effect you desired.

Often I hear the advice that photographers ought to set their cameras to Adobe RGB, and just as often I hear photographers complain that their photos look washed out and unsaturated, because they did use Adobe RGB but didn't know how to manage it. So I recommend using sRGB and nothing else, even though it isn't the ‘best’.
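
If you are stuck with Adobe RGB files destined for sRGB output, the remedy is a real profile conversion, not merely retagging the file. A minimal sketch using Python's Pillow library, assuming the file carries an embedded ICC profile; the file names are hypothetical:

```python
import io
from PIL import Image, ImageCms

im = Image.open('photo_adobergb.jpg')   # hypothetical Adobe RGB file

# Read the ICC profile embedded in the file itself.
src = ImageCms.ImageCmsProfile(io.BytesIO(im.info['icc_profile']))
dst = ImageCms.createProfile('sRGB')    # sRGB is built into LittleCMS

# A true colorimetric conversion: the numbers change so the colors don't.
srgb = ImageCms.profileToProfile(im, src, dst)
srgb.save('photo_srgb.jpg')
```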

When I do need to produce colors outside of sRGB, for example when preparing images for commercial four-color print, I will use a larger-gamut color system and eventually work directly in CMYK. If you are outputting to a broad-gamut color printer that uses more than four inks, then use a high-gamut color system and load the printer's color profile into Photoshop, keeping an eye on the gamut warning feature. If you are outputting images to the web, use sRGB.
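
The CMYK step can be sketched with the same Pillow machinery; the press profile path below is a placeholder for whichever profile your printer supplies:

```python
from PIL import Image, ImageCms

im = Image.open('photo_srgb.jpg')                    # hypothetical sRGB image
src = ImageCms.createProfile('sRGB')
dst = ImageCms.getOpenProfile('/path/to/press.icc')  # placeholder profile

# Perceptual intent gracefully squeezes out-of-gamut colors inward.
cmyk = ImageCms.profileToProfile(
    im, src, dst,
    renderingIntent=ImageCms.INTENT_PERCEPTUAL,
    outputMode='CMYK',
)
cmyk.save('photo_cmyk.tif')
```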

In the philosophy of logic, we say that a statement is true if it corresponds with being, with something that actually exists in the entire realm of being. But we say that a statement is meaningful if it does not encompass a logical contradiction: a square circle is not meaningful. If you can imagine something that does not actually exist, see it with your mind's eye, then your imagination still has meaning, even if it doesn't have truth. For example, you can imagine Doberman pinschers with wings; these don't actually exist in our world, but you can imagine them without contradiction. Likewise with supergreen colors, which are meaningful even if you can't have a paint of that color.

Wednesday, August 18, 2010

Nikon View NX 2

NIKON RELEASED a new version of their free photo editing software; you can get it here: View NX 2.

In my opinion, this software produces better RAW conversions than does my version of Adobe Camera RAW with Photoshop CS3. I've noticed that it has much better color consistency between differently exposed images of the same subject, which is important for exposure blending, and it produces fewer noisy color artifacts along high-contrast edges. The colors also look much better, and the automatic chromatic aberration removal is a time-saving feature.

But I can't wait to try ACR on Photoshop CS5 — when I can afford it.

A huge problem with the old View NX was that it was very slow, primarily because it used lots of memory. The new NX 2 looks much better and performs far better, too: now I can have all my applications open at one time without my computer grinding to a halt.

I use the D-Lighting feature all the time to boost shadows, but the old version had just a few fixed settings, and the low setting often didn't work at all, or made things worse. The new version has a very smooth slider for continuous adjustment.

UPDATE: There appears to be a bug in the software. When I make edits to a RAW file and save them, I output to TIFF, which I then pull into Photoshop for further editing. THIS DOES NOT WORK. The TIFF file has the original RAW image, without the edits. Not good.

CORRECTION: The bug I saw was due to my having previously edited the image in Adobe Camera RAW. This creates a sidecar file, which apparently confuses ViewNX. If I delete that sidecar file (it has the file extension .xmp), ViewNX works properly.
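
For anyone hitting the same problem, here is a small Python sketch that merely lists the .xmp sidecars sitting next to RAW files, so you can review them before deleting anything; the folder path and the .NEF extension are assumptions:

```python
from pathlib import Path

folder = Path('/path/to/photos')  # assumption: your image folder

# Find .xmp sidecars that sit next to a RAW file of the same name.
for xmp in sorted(folder.glob('*.xmp')):
    raw = xmp.with_suffix('.NEF')  # assumption: Nikon RAW files
    if raw.exists():
        print('Sidecar for', raw.name, '->', xmp.name)
        # xmp.unlink()  # uncomment to actually delete the sidecar
```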