The Refracted Light: A website about the Art and Science of Photography, by Mark S. Abeln<br />
<br />
<b>Are there cameras with good skin tones?</b> (April 19, 2015)<br />
<br />
<span style="font-size: x-large;">A PHOTOGRAPHER ASKS: </span><span style="font-size: large;">“What do they mean by a camera with a good skin tone? Is it subjective?”</span><br />
<br />
Well, a camera doesn’t have skin, but some cameras render the color of human skin better than others.<br />
<br />
There are several factors that go into healthy <a href="http://en.wikipedia.org/wiki/Human_skin_color">skin color</a>:<br />
<ul><li>The base connective tissue, which is slightly bluish pink. This contributes to the skin coloring of the very palest individuals, including those with albinism, in whom melanins are absent.</li>
<li>Blood, which usually gives a red hue due to the capillaries near the skin surface; this can be variable — less in the cold or during fright, and prominent during exercise. Anemia can change blood color, giving pale individuals an ashen appearance. </li>
<li>Pheomelanin is reddish in color and is the main component in the coloring of redheads. It is found in larger quantities in most females, and is found in greater concentration in the lips, nipples, and sex organs.</li>
<li>Brown eumelanin — actually a dark yellow color — leads to blonde hair and yellow and olive skin. When coupled with pheomelanin, it gives brown hair. </li>
<li>Black eumelanin, which is present in dark skin and in black hair; in small quantities it gives grey hair.</li>
<li>Eumelanins are typically found in lesser quantities in females than in males.</li>
<li>The relative proportions of the melanins change with age, and may become blotchy or enhanced due to sun exposure. Some diseases may also change skin color beyond the normal range determined by the factors listed above, making nontypical skin color a good indicator of illness.</li>
</ul>These combine in various proportions, with melanin predominating where present in significant quantities. Because of this limited number of factors, the range of healthy human skin hue is rather narrow: about 30 or 40 degrees of the entire 360-degree <a href="http://en.wikipedia.org/wiki/HSL_and_HSV">HSB</a> color circle, and narrower still for most of the population. The typical human eye is quite sensitive to changes in those hues, because that is where two of the eye’s cone cell light receptors, the L and M cones, strongly overlap and where color differentiation is particularly acute; this strong differentiation of hue is also useful for determining the ripeness of fruit as it changes from green to yellow, orange, or red.<br />
<br />
For human skin, under a light source with a relatively flat spectrum and with neutral white balance, the sRGB numbers satisfy Red > Green > Blue for all ethnicities, except for the very lightest and darkest, where blue might be equal to or slightly greater than green. With rare exceptions, green is always less than red and never exceeds it. If a camera can’t reliably give us at least those relationships, then the camera can’t give us good skin tone.<br />
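Those channel relationships are easy to check numerically. Here is a minimal Python sketch using only the standard library; the 50-degree hue ceiling and the sample pixel values are illustrative assumptions, not measured data:

```python
import colorsys

def skin_hue_degrees(r, g, b):
    """HSB hue in degrees for an 8-bit sRGB triple."""
    h, _, _ = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return h * 360.0

def plausible_skin(r, g, b):
    """Rough plausibility check: R > G >= B, with the hue in a roughly
    0-50 degree window (the window is an illustrative assumption)."""
    if not (r > g >= b):   # green never exceeds red; blue may roughly equal green
        return False
    return skin_hue_degrees(r, g, b) <= 50.0

print(plausible_skin(210, 160, 130))   # a mid-toned skin sample: True
print(plausible_skin(50, 200, 50))     # grass green: False
```

A check like this is only a sanity test on the final sRGB numbers; it says nothing about how the camera arrived at them.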
<br />
If a camera’s automatic white balance algorithm can’t be relied on to work well, then the camera won’t give us good skin color. White balance subtracts out the color of the light, and some cameras are better at this than others. Significantly, some camera models will detect faces and use the narrow range of skin hues as a means of adjusting white balance.<br />
<br />
For this reason, many photographers will use a manual white balance, taking that one variable out of the equation.<br />
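To see what an automatic white balance algorithm is up against, here is a sketch of the classic ‘gray world’ heuristic in Python. It is one of the simplest published approaches and is not what any particular camera actually uses; the sample scene values are invented:

```python
def gray_world_gains(pixels):
    """Gray-world heuristic: assume the scene averages to neutral gray,
    then scale each channel so the channel means become equal."""
    n = len(pixels)
    avg = [sum(p[c] for p in pixels) / n for c in range(3)]
    gray = sum(avg) / 3.0
    return [gray / a for a in avg]          # per-channel gains

def apply_gains(pixels, gains):
    return [tuple(min(255.0, p[c] * gains[c]) for c in range(3)) for p in pixels]

# A scene lit by warm light: too much red, not enough blue
scene = [(200, 150, 100), (100, 75, 50)]
gains = gray_world_gains(scene)             # [0.75, 1.0, 1.5]
balanced = apply_gains(scene, gains)        # both pixels come out neutral
```

The heuristic fails exactly where skin matters: a frame filled with a face does not average to gray, which is why face-detecting white balance, or a manual setting, does better.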
<br />
Exposure becomes a significant problem for the very lightest and darkest of individuals. It is easy to overexpose the red channel for pale individuals — turning the skin hue more bluish or greenish — and to underexpose the blue and green channels for the darkest individuals, making their skin tone look too saturated and too red. So if a camera doesn’t give good exposure, there will be problems with skin color.<br />
<br />
For this reason, many photographers will keep a close eye on exposure, checking their camera’s three-color histograms. Modern cameras with a high dynamic range can better avoid this over- and underexposure.<br />
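Watching the three-color histogram amounts to counting clipped values per channel. A toy sketch, assuming 8-bit values and invented sample pixels:

```python
def channel_clipping(pixels, lo=0, hi=255):
    """Fraction of pixels clipped at the bottom and top of each channel."""
    n = len(pixels)
    report = {}
    for c, name in enumerate("RGB"):
        under = sum(1 for p in pixels if p[c] <= lo) / n
        over = sum(1 for p in pixels if p[c] >= hi) / n
        report[name] = (under, over)
    return report

# Hypothetical pale-skin samples: the red channel hits 255 first
pale = [(255, 230, 220), (255, 240, 225), (250, 228, 215), (255, 235, 222)]
print(channel_clipping(pale)["R"])   # (0.0, 0.75): red is 75% blown
```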
<br />
Most young, healthy humans have a skin hue that ranges from a slightly bluish-red or rose color through yellowish-orange, depending on ethnicity and sun exposure. These skin hues also vary slightly across each individual, as some parts of the skin are changed more by sun exposure than others, and there is blotching as well. So the structure of the red and green channels is critical for getting good skin tones. Certainly those two channels must overlap significantly, but what is critical is that their sensitivities ought to change strongly over the range of human skin hues — not so much of a change that there is an abrupt cutoff in hue (this used to be a problem in video cameras), but not so little that subtle differences are missed, which is a common problem these days.<br />
<br />
Due to <a href="http://en.wikipedia.org/wiki/Metamerism_(color)">metameric</a> failure, it is entirely possible that a camera won’t be able to distinguish two colors that are distinct to the typical human eye, particularly under some light sources, or that the camera will render differently two surfaces that appear identical to the eye. This problem cannot be corrected by the camera; the best you can do is select the offending surfaces in your software and force them to a different color. If your camera has a strong metameric mismatch with skin colors, you are in big trouble. A striking, even slightly creepy, example of metameric failure is found in cameras that have some ultraviolet sensitivity: melanin in human skin absorbs ultraviolet to protect the tissues below, and so photographs of people taken with these cameras show lots of dark blotches from this melanin.<br />
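Metamerism can be demonstrated with a little arithmetic. The sketch below uses three made-up Gaussian channel sensitivities, not data from any real sensor or eye, and constructs a spectrum from three narrow lines that the channels cannot distinguish from broadband white:

```python
import math

def gauss(center, width):
    return lambda wl: math.exp(-((wl - center) / width) ** 2)

# Made-up Gaussian channel sensitivities: purely illustrative,
# not measured data from any real sensor or eye.
CHANNELS = (gauss(600, 50), gauss(540, 45), gauss(450, 40))
WAVELENGTHS = range(400, 701)   # visible band, 1 nm steps

def response(spectrum):
    """Integrate a spectrum (nm -> power) against each channel."""
    return [sum(spectrum(w) * ch(w) for w in WAVELENGTHS) for ch in CHANNELS]

def line(center):
    """A narrow spectral line at a single wavelength."""
    return lambda wl: 1.0 if wl == center else 0.0

def solve3(m, v):
    """Cramer's rule for a 3x3 linear system m x = v."""
    def det(a):
        return (a[0][0] * (a[1][1] * a[2][2] - a[1][2] * a[2][1])
              - a[0][1] * (a[1][0] * a[2][2] - a[1][2] * a[2][0])
              + a[0][2] * (a[1][0] * a[2][1] - a[1][1] * a[2][0]))
    d = det(m)
    out = []
    for i in range(3):
        mi = [row[:] for row in m]
        for r in range(3):
            mi[r][i] = v[r]
        out.append(det(mi) / d)
    return out

flat = lambda wl: 1.0                       # broadband 'white' light
basis = [line(450), line(540), line(610)]   # three narrow, display-like primaries
cols = [response(b) for b in basis]
# Solve for line weights that reproduce the broadband response
weights = solve3([[cols[j][i] for j in range(3)] for i in range(3)], response(flat))
metamer = lambda wl: sum(wt * b(wl) for wt, b in zip(weights, basis))
# 'flat' and 'metamer' are physically different spectra, yet the three
# channels cannot tell them apart: that is metamerism.
```

Any detector limited to three broad channels, whether an eye or a camera, has such indistinguishable pairs; the trouble starts when the camera’s pairs differ from the eye’s.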
<br />
There are a number of Canon and Nikon photographers who prefer using their older cameras for portraiture — generally those made before about mid-2008 — because they say that these deliver better skin color. After that time, these camera makers widened the response of their color channels in order to provide better high-ISO performance. But this was a trade-off, since subtle distinctions in skin color can be lost. Some of the newer cameras instead deliver blotchy skin hues due to an abrupt change in detected hue, while the older cameras detect more intermediate hues. I ought to note that newer models of some other brands have retained good skin color.<br />
<br />
It is here that good software can help. Converting raw sensor data into a JPEG is a conceptually complex process that is quite error-prone and subject to trade-offs. One of these steps is applying the color profile, which converts the limited spectral data captured by the camera into a standard color space such as sRGB. Designing a good color profile for a camera seems to be more of an art than a science. Some color profiles may shift hues according to lightness, leading to uneven skin color, while a profile which doesn’t do this will have to sacrifice accuracy in some hues in favor of others, which may harm captured skin color. Both Nikon and Canon provide camera profiles specifically designed for portraiture — but better accuracy in skin color is obtained at the expense of less accuracy in other colors.<br />
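At its simplest, a color profile is just a 3×3 matrix from linear camera RGB to a standard space. The matrix below is invented for illustration, since real profile matrices are measured per camera model, but it shows the one property such matrices share: each row sums to one, so neutral tones stay neutral:

```python
# A hypothetical camera-to-linear-sRGB profile matrix; real profiles are
# measured per camera model, so these numbers are for illustration only.
CAMERA_TO_SRGB = [
    [ 1.80, -0.60, -0.20],
    [-0.25,  1.45, -0.20],
    [ 0.05, -0.50,  1.45],
]

def apply_profile(camera_rgb, matrix=CAMERA_TO_SRGB):
    """Map a linear camera RGB triple into the output space."""
    return tuple(sum(matrix[i][j] * camera_rgb[j] for j in range(3))
                 for i in range(3))

# Each row sums to 1.0, so a neutral gray stays neutral:
print(apply_profile((0.5, 0.5, 0.5)))   # approximately (0.5, 0.5, 0.5)
```

The off-diagonal terms are where the art lies: they trade accuracy in one region of color space, such as skin, against accuracy elsewhere.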
<br />
As modern cameras have great color depth, we have lots of data to process. We can try to force more overall average color accuracy if needed, but this might end up producing noisier or blotchier colors on some cameras. Noise reduction, especially chroma blur, can help: blurring the colors on human skin is far more acceptable than blurring luminance detail, which gives the plastic-skin look. Here a good raw converter, one that uses significantly greater mathematical precision and advanced processing algorithms, can help quite a bit. But this precision is a trade-off: I frequently use some good raw processing software, but it can take a minute or more to process an image, something that would be completely unacceptable in a consumer camera, which needs fast response times.<br />
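The idea behind chroma blur can be sketched in a few lines: separate luminance from color, smooth only the color planes, and recombine. This toy version uses BT.601 luma weights and a plain box blur; real raw converters use far more sophisticated filters:

```python
def rgb_to_ycc(r, g, b):
    """Split a pixel into luma (Y) and two chroma differences (BT.601 weights)."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    return y, b - y, r - y                    # Y, blue-difference, red-difference

def ycc_to_rgb(y, cb, cr):
    r, b = y + cr, y + cb
    g = (y - 0.299 * r - 0.114 * b) / 0.587   # invert the luma equation
    return r, g, b

def chroma_blur(image, radius=1):
    """Box-blur only the chroma planes; luma (the detail) is untouched."""
    h, w = len(image), len(image[0])
    ycc = [[rgb_to_ycc(*image[y][x]) for x in range(w)] for y in range(h)]
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            cbs, crs = [], []
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy = min(max(y + dy, 0), h - 1)   # clamp at the edges
                    xx = min(max(x + dx, 0), w - 1)
                    cbs.append(ycc[yy][xx][1])
                    crs.append(ycc[yy][xx][2])
            row.append(ycc_to_rgb(ycc[y][x][0],
                                  sum(cbs) / len(cbs),
                                  sum(crs) / len(crs)))
        out.append(row)
    return out
```

Because the luma plane passes through unchanged, sharpness survives while speckly color noise is averaged away, which is exactly why chroma smoothing is so much kinder to skin than luminance smoothing.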
<br />
Finally, all of us become grayer as we age, as our skin and hair hues become less saturated. Some cameras and software attempt to counteract this by processing skin hues to be more saturated, and perhaps by shifting the hues towards red — which won’t work for all ethnicities. Also, there are some skin conditions that lead to a blotchy look, and so some retouchers will correct for this. Makeup can be problematic if the hues used don’t match up well with the base skin tone — and we are increasingly seeing metameric failure between makeup and skin color under LED lighting.<br />
<br />
So there are many factors that can lead to a camera having good skin tone. Some are under the control of the photographer, others might need special software, and still others are beyond any in-camera correction and so need a good camera, known for good skin color, to begin with.<br />
<br />
<b>An Expensive and a Free Way of Matching Screen and Print</b> (December 7, 2014)<br />
<br />
<span style="font-size: x-large;">A PHOTOGRAPHER ASKS:</span> <i>“When I make a print, it looks duller, grayer, and more bland than what I see on my computer monitor, even when I’m using soft proofing. Do you have any tips or advice on making the computer image match my prints?”</i><br />
<br />
The range of colors — or gamut — that can be printed on a typical color printer is less than what can be displayed on a computer monitor, and so Photoshop and some other image editing packages include a ‘soft proof’ tool which limits the colors on the screen to match the printer gamut. While this is a good way of checking color limits, very often people complain that the prints are much darker than what is seen on the screen. But this is to be expected, yes? You have a nice bright monitor lit by powerful lamps behind the screen, while your prints are viewed under whatever dim lighting you might have in the room where you keep your computer. Comparing these side-by-side is going to be disappointing.<br />
<br />
Soft proofing will only be close to accurate if the brightness of what you see on the screen matches the brightness of the print — where the brightest white on the screen is equal to the brightest possible white you can see on a print. This is usually not the case: many monitors, even at their dimmest setting, are far brighter than typical ambient home or office lighting conditions.<br />
<br />
It is good practice, then, to <span style="font-weight: bold;">turn down your monitor brightness</span> enough to allow both comfortable editing and good print matching. This will probably get you 80% of the way towards good soft proofing, and it costs you nothing. Any more physical accuracy will increase your costs and decrease your convenience dramatically.<br />
<br />
Now if you turn up the lights in your computer room — or turn down the monitor brightness by a lot — you might have so much glare on your screen that editing becomes difficult. To correct for this, some folks put a shield around the computer monitor, black on the inside, preventing much stray room light from hitting the screen — much like a lens hood. You might be able to make one yourself out of cardboard and black spray paint.<br />
<br />
This still might not allow a good match in brightness between your monitor and print, because, as mentioned, it might be quite impractical and undesirable to have the room brightness match your monitor brightness. In this case, imaging professionals will often use a ‘proof light box’: a good-sized enclosure in which you can put your print, with a number of presumably precisely specified lamps inside that can be adjusted in brightness to match the monitor.<br />
<br />
Using a light box allows for practical monitor brightness settings as well as desirable room brightness. However, this will not work well if the color of the lamps doesn’t match the monitor. The brightest white on the monitor ought to match a pure white object in the light box, not only in brightness but in overall color cast, and so the right lamps need to be selected — but be aware that changing the brightness might very well change the color temperature of the lamps (common with incandescent lamps) — making your selection considerably more difficult.<br />
<br />
However, you still may have a problem. A computer monitor has a multitude of red, green, and blue dots, which can be mixed in fine proportions to produce millions of colors with relatively good accuracy. By contrast, the colors of a print are going to be strongly influenced by the spectral qualities of the lamp used to view it. If you don’t use a specially made, spectrally accurate lamp, the colors will very likely differ — sometimes greatly — between the monitor and print, even though the tonality of dark and light neutrals might look the same. This problem is called metameric failure.<br />
<br />
This still might not give you an accurate match. The sRGB standard, which defines the most common data format used in digital images, specifies that images should be viewed against a dark gray surround — which causes the eye to perceive shadow tones as brighter than if they were surrounded by a white background. Photoshop does this normally, and your light box ought to have the same shade of gray surrounding your print. However, if you will eventually view your photo in a frame with a white matte around it, then you might want to edit the image with a brighter surround — and likewise evaluate the print with the same brightness of surround. Also, be sure to view the print and the computer image at the same size and distance.<br />
<br />
Human eyes constantly adjust themselves to the lighting conditions, and so if there is a strong color cast in the room — say, from bright, saturated paint on the walls — then your eyes will adapt, neutralizing the color a bit. This adaptation will affect your evaluation of the images, and will change your impression of the print under the room’s ambient conditions more than what you see on a bright monitor. For this reason, unsaturated colors in the computer room are desirable, and a medium gray is even more desirable.<br />
<br />
Getting a good visual match between the monitor and print is going to be difficult and expensive. But there is an alternative. What I do is <i>measure</i> the brightness of the various parts of the image, using Photoshop’s eyedropper tool and by analyzing its histogram, and I adjust the values to give me what I know will be good values in the final print. Basically, I know, based on the measured color numbers in the digital image, what the colors will look like in the final print. I know that I don’t want the shadows to be too dark, and I adjust accordingly. If I need saturated colors, then I’ll use proofing and adjust the image to give me good bright colors without blowing out any of the ink values, which would lead to loss of detail and texture as well as shifts in hue. This ‘by the numbers’ method is inexpensive, accurate, and highly predictable — if somewhat difficult to do well — and I really don’t need an accurate visual match on my screen. It also has the advantage that I <i>know</i> that my colors are right, even if I’m not seeing right at any given time, like when I’m tired.<br />
<br />
<b>Color Trek</b> (May 5, 2014)<br />
<br />
<span style="font-size: x-large;"><i>“P’tak!”</i></span> shouted the Klingon, as he drove his dagger into the computer monitor, causing a shower of sparks to shoot across the bridge of the starship. <i>“Nga’chuq</i>ing <i>pjqlod</i> of a<i> sli-vak!”</i> He proceeded to stomp the remaining bits of advanced computer hardware into tiny pieces, cursing furiously.<br />
<br />
Commander Riker, stroking his clean-shaven chin, asked, “Hey Worf! Is something the matter?”<br />
<br />
“Stupid Earth technology! <i>VeQ!</i> We almost lost the number 1 core because of that idiotic monitor. Overload conditions are to be displayed in red,” Worf growled, looking both hurt and angry, as if he was personally disrespected, “but that <i>mIghtaHghach</i> display is ambiguous. I can hardly tell a normal condition from abnormal.”<br />
<br />
“Well, it looked pretty red to me,” replied Riker, “and I thought that I probably should have mentioned it to you, but you know how you get whenever you are contradicted.” Then he thought to himself, “<i>Maybe I ought to grow a beard. I’d get a lot more respect that way.</i>” Riker, turning to a pale android who seemed to have ignored the recent outburst of violence, asked “Data, any idea what is going on here?”<br />
<br />
Lieutenant Commander Data showed his typical puzzled robotic expression. “I wish I could help you,” he replied, “but Ensign Crusher is siphoning off 98.7% of my positronic CPU capacity in order to run an <i>otome gēmu</i> simulation. Please wait; process terminating.” [An anguished cry is heard from the other side of the bridge: “Noooooo!”]<br />
<br />
“Let me download some pertinent information. OK. Lieutenant Worf, you stated that you were unable to distinguish between overload and normal conditions on the monitor, based on the status color. —All right, by your expression of anger I can assume the affirmative. Commander Riker, you state that the status condition color was red as expected for a core overload condition.” Data nodded seriously. “I think I know what the problem is.”<br />
<br />
“Let me demonstrate.” Data randomly selected an image from the starship’s database, displaying it on the forward screen, an image showing Counselor Troi having too much to drink at a party. “Commander Riker, how does this image look to you?” He paused. “Commander Riker?”<br />
<br />
“Yeah, she looks really great,” he replied, “Could you put a copy of that in my personal files?”<br />
<br />
“Sir, I am asking you about the color rendition of the display. Do the colors look accurate to you?”<br />
<br />
“Yes. Fine. Looks great.”<br />
<br />
“Lieutenant Worf, how does this image look to you?”<br />
<br />
“Like many Earth displays, it is far too bright — and maybe, what? fuzzy?”<br />
<br />
“Would you say that the monitor has low contrast? That it does not render black tones well?”<br />
<br />
“Yes. Exactly,” replied the Klingon.<br />
<br />
“Klingon eyes,” explained Data, “are similar to human eyes in that they have a long-wave class of photon receptors which are generally sensitive to the red part of the spectrum; however, the sensitivity extends into what humans call the invisible ‘infrared’ — but of course, to Klingons, that part of the spectrum is simply ‘red’, indistinguishable from what a human would call red.”<br />
<br />
The android continued, “Many Human displays, such as this one, emit a significant amount of uncontrolled infrared radiation, visible to Klingons, but not humans. This both makes the display excessively bright to Klingon eyes, as well as low in contrast. Also, I can assume that all humans, to Klingon eyes, look pale, due to the transparency of human skin to infrared.”<br />
<br />
“Yes,” replied Worf, “they all look alike to me.” He addressed Lieutenant Commander La Forge: “But I always recognize you from your VISOR.”<br />
<br />
“Lieutenant,” continued Data, “what about the colors on the screen? How do they look to you?”<br />
<br />
“They are bad. Wrong. Filthy. I always have a nagging suspicion that humans choose wrong colors because they are weak, decadent aesthetes, or that they are doing that simply to outrage my people.”<br />
<br />
“You know that isn’t true. Perhaps you could describe to us the colors of the spectrum? As you see them?”<br />
<br />
“Of course. Red, orange, yellow, green, <i>kth’arg</i>, and blue. Every child knows that.” <br />
<br />
“<i>Kth’arg</i>?” asked Riker. “Not cyan?”<br />
<br />
“Yes, <i>kth’arg</i>,” snarled the Klingon. “Yes, cyan is a mixture of green and blue; I know that and I see that. But <i>kth’arg</i> is not cyan. <i>Kth’arg</i> is <i>kth’arg</i>. That photograph up there,” he pointed to the screen, “has colors that ought to be <i>kth’arg </i>but are blue, and are <i>kth’arg</i> instead of green. Things that are orange in real life appear to be red, and things that are yellow are shown as orange.” His hand moved slowly towards his dagger.<br />
<br />
“Human eyes,” said Data, “have three general classes of light receptors, notionally identified as being sensitive to red, green, and blue light, or more accurately, to long, medium, and short wavelengths of light respectively. Klingons have a weak fourth class of color receptors which have a peak sensitivity to wavelengths between the ‘green’ and ‘blue’ receptors, which leads to the <i>kth’arg</i> sensation. As such, it is not a translatable or perceivable color to humans. Also, I might note that Klingons, due to the downwardly-shifted sensitivity of their red receptors, are unable to distinguish between blue and spectral violet — they literally are the same color to them.”<br />
<br />
“That foul monitor,” said the Klingon, pointing to the pile of smoldering components, “was supposed to display normal condition with a green color, but it looked to me like an orangish <i>kth’arg</i>, while abnormal conditions ought to be shown in red, but instead looked like a <i>kth’argy </i>orange. Those colors are almost identical. If I hadn’t double-checked the core status, we would have all been dishonorably slain by now.”<br />
<br />
“Now wait,” replied Riker, “I don’t understand this. Aren’t we using the latest equipment? How can this go wrong?”<br />
<br />
“The problem is known as metamerism, where many combinations of wavelengths are perceived as the same color,” replied the android. “My own ocular sensors are multispectral — I can distinguish among many narrow spectral bands of light — but these perform poorly under dim lighting conditions such as we experience here on the bridge, and so my vision module can combine many narrow bands into several large overlapping ones, similar to human vision. This gives me <i>some</i> color distinction while greatly reducing the amount of photonic noise in my sensors. While I have less data to go on, it usually hardly matters, and it greatly reduces the burden on my CPU from vision processing. But what this means is that there is an infinity of combinations of wavelengths that deliver the same perceived color. You might perceive a single wavelength as ‘cyan’, but any number of combinations of blue and green wavelengths will give you the same perception — that is metamerism. As it so happens, that single wavelength you perceive as ‘cyan’ is perceived by Lieutenant Worf as <i>kth’arg</i>, while both you and he perceive combinations of green and blue as cyan. This is a metamerism failure.”<br />
<br />
“So?”<br />
<br />
“Our Federation Standard displays use three primary colors — red, green, and blue — which when mixed together in varying proportions can deliver a significant fraction of all colors visible to the typical human eye. But don’t forget, there are an infinity of combinations of wavelengths of light that will be perceived — by a human — as the same primary color. The particular green color used in Lieutenant Worf’s late display — to him — appears to be more of a <i>kth’arg </i>color and not green at all. Another display — identical to you — might appear to be quite different to him, simply because its primary colors use a different metameric mix of wavelengths. You see, Sir, our displays — with three primary colors — are designed specifically for <i>human</i> vision. A Klingon display needs at least four primary colors — including an infrared component — with a metamerism tailored for Klingon vision.”<br />
<br />
“Commander Riker, Sir,” requested Data, “may I suggest that we requisition a Klingon-specific display for Lieutenant Worf’s use.”<br />
<br />
[Three months later…]<br />
<br />
“So Worf,” said Riker, confidently stroking his new beard, “how is the new display?”<br />
<br />
“It is glorious! I shall compose a song praising its beauties.”<br />
<br />
“Looks like garbage to me,” laughed the Commander as he walked away.<br />
<br />
<b>Cook Your Own Raw Files, Part 3: Demosaic Your Images</b> (April 12, 2014)<br />
<br />
<span style="font-size: x-large;">AS SOON AS</span> I wrote the following lines at the end of the last article in this series, I regretted it; I wasn’t sure why, but I had a nagging feeling that it was too dismissive:<br />
<blockquote><i>Logically, I am presenting demosaicing as the first step in processing a raw file, but this is not necessarily the best thing to do — I am simply describing it now <b>to get it out of the way</b>.</i></blockquote>The article is <a href="http://therefractedlight.blogspot.com/2014/03/cook-your-own-raw-files-part-2-some.html"><i>Cook Your Own Raw Files, Part 2: Some Notes on the Sensor and Demosaicing</i></a>.<br />
<br />
By trying to “get it out of the way,” perhaps I wasn't thinking about my readers, who might get something out of attempts to do demosaicing themselves. It is an important process that goes on in the camera, something that needs to be done <i>right</i>. The purpose of this series is to demonstrate the various steps that a digital camera or raw processor might go through in producing an image, for the education of photographers and retouchers. Simply omitting an important step because it is difficult is not helping anyone.<br />
<br />
In the first article in the series, <a href="http://therefractedlight.blogspot.com/2014/02/cook-your-own-raw-files-part-1.html"><i>Cook Your Own Raw Files, Part 1: Introduction</i></a>, I mentioned that I use <a href="http://www.raw-photo-processor.com/">Raw Photo Processor</a> to produce lightly-processed images that can be used to experiment with the kinds of processing that goes on in raw converters. But this is a Macintosh-only application; and so, for users of Windows, Linux, and other computer operating systems I suggested using <a href="http://www.cybercom.net/~dcoffin/dcraw"><span style="font-family: Courier New, Courier, monospace;">dcraw</span></a>, but I hardly knew much about it.<br />
<br />
But I recently discovered that <span style="font-family: Courier New, Courier, monospace;">dcraw</span> can produce <i>undemosaiced</i> images; while this is in the <a href="http://www.cybercom.net/~dcoffin/dcraw/dcraw.1.html">documentation</a>, I had overlooked it. This feature allows interested persons to easily try out the demosaicing process themselves. You can get your own free copy of <span style="font-family: Courier New, Courier, monospace;">dcraw</span> at <a href="http://www.cybercom.net/~dcoffin/dcraw/">http://www.cybercom.net/~dcoffin/dcraw/</a>; it is a command-line utility, which makes it more difficult for those computer users familiar only with graphical interfaces, but a bit of effort in trying it out can be worthwhile. The command to convert a file is:<br />
<br />
<span style="font-family: Courier New, Courier, monospace;">dcraw -v -E -T -4 -o 0 DSC_5226.NEF</span><br />
<br />
where ‘<span style="font-family: Courier New, Courier, monospace;">DSC_5226.NEF</span>’ is the camera raw file that you want to convert. You must issue the command in the same directory or folder where the file is located, or otherwise supply a path to the file. The command will output a TIFF file in the same directory: in this instance, dcraw will output <span style="font-family: Courier New, Courier, monospace;">DSC_5226.tiff</span>.<br />
<br />
The options used have this meaning:<br />
<ul><li><span style="font-family: Courier New, Courier, monospace;">-v </span>Verbose output — the command will display extra text which might be useful in our further processing.</li>
<li><span style="font-family: Courier New, Courier, monospace;">-E </span>Image will not be demosaiced; also, pixels along the edges of the sensor, normally cropped by the camera, are retained.</li>
<li><span style="font-family: Courier New, Courier, monospace;">-T </span>Output an image conforming to the Tagged Image File Format (TIFF). Many common image editors can read these lossless files.</li>
<li><span style="font-family: Courier New, Courier, monospace;">-4 </span>A 16-bit image is produced instead of the more common 8-bit image, which gives us more accuracy, without any <a href="http://en.wikipedia.org/wiki/Gamma_correction">gamma conversion</a> or white-level adjustment. This gives us a dark, linear image.</li>
<li><span style="font-family: Courier New, Courier, monospace;">-o 0 </span>This is a lower-case letter ‘o’ and a zero. This turns off the adjustment of colors delivered by the camera; this option will deliver uncalibrated and unadjusted color.</li>
</ul><div>These settings will give us an image file that most closely represents the raw data delivered by the camera. I ought to note that with these options dcraw will not perform a white balance: the mechanism that compensates for the color of the light illuminating the scene is left off.</div><br />
Here is a view of some bookshelves in my office, from approximately the same angle of view that I see from my computer:<br />
<br />
<a href="https://www.flickr.com/photos/msabeln/13702090833" title="Messy bookshelf - Adobe Camera Raw developed by Mark Scott Abeln, on Flickr"><img alt="Messy bookshelf - Adobe Camera Raw developed" height="331" src="https://farm4.staticflickr.com/3668/13702090833_c0bd13f165.jpg" width="500" /></a><br />
<br />
The color and tonality in this photograph on my calibrated monitor look pretty close to what I see in real life — maybe some of the bright yellows are a <i>bit</i> off. Otherwise this is a suitable image taken with reasonably good gear and technique. I will use this image as the sample for our further processing in this article. You can get a copy of this original Nikon raw NEF file <a href="https://drive.google.com/file/d/0BxoRrNrmaYrIcmdSQlVhTXVrX28/edit?usp=sharing">here</a>.<br />
<br />
OK, I ran this image through <span style="font-family: Courier New, Courier, monospace;">dcraw</span> using the settings shown above, and I get this result:<br />
<br />
<a href="https://www.flickr.com/photos/msabeln/13702419774" title="DSC_5492 - undemosaiced by Mark Scott Abeln, on Flickr"><img alt="DSC_5492 - undemosaiced" height="329" src="https://farm8.staticflickr.com/7311/13702419774_f1df945d46.jpg" width="500" /></a><br />
<br />
Not much to see here! If we take a closer look at the file:<br />
<br />
<a href="https://www.flickr.com/photos/msabeln/13702080123" title="DSC_5492 - undemosaiced, detail by Mark Scott Abeln, on Flickr"><img alt="DSC_5492 - undemosaiced, detail" height="750" src="https://farm8.staticflickr.com/7163/13702080123_16a2c2dd7c_o.png" width="500" /></a><br />
<br />
We can see that it is monochrome, extremely dim, and has the camera's mosaic pattern on it. You can download a copy of this processed file <a href="https://drive.google.com/file/d/0BxoRrNrmaYrIOVAzSzlZSHVUX2c/edit?usp=sharing">here</a>.<br />
<br />
The image is so dark partly because my Nikon delivers 14 bit raw images — while the image format itself is 16 bits, and so we have unused brightness numbers. If you are unfamiliar with bits, you might want to review the Wikipedia article on <a href="http://en.wikipedia.org/wiki/Binary_number">binary numbers</a>. Basically, the number of bits in this case is a measure of how many levels of brightness are represented in the raw file. If you add one bit of depth to an image, you double the number of levels of brightness:<br />
<ul><li>If you have a one bit-depth image, you only have two levels of brightness — white and black.</li>
<li>In a two bit image, you have four levels of brightness — black, dark gray, light gray, and white.</li>
<li>Three bits gives 8 levels, four bits give 16 levels, five gives 32 and so forth.</li>
<li>A 14 bit image has 16,384 levels, and a 16 bit image has 65,536 levels.</li>
</ul>Each additional bit doubles the number of levels, and since my camera has 14 bits, it has one quarter the number of brightness levels that can be found in a 16 bit image. Since the dcraw command delivered a linear image — where ‘linear’ means that a pixel with double the exposure will have double the brightness number — we can correct for the camera's lack of a full 16 bits by using the Levels tool in Photoshop. Here is the original histogram of the image:<br />
<br />
<a href="https://www.flickr.com/photos/msabeln/13703410714" title="Original histogram of DSC_5492 by Mark Scott Abeln, on Flickr"><img alt="Original histogram of DSC_5492" height="229" src="https://farm4.staticflickr.com/3717/13703410714_2fb2e6b89e_o.jpg" width="349" /></a><br />
<br />
When we process an image, we need to use all of the 16 bits, because white is defined as the brightest 16 bit number.<br />
<br />
Understand that with a linear image such as this, the entire right half of the histogram holds the brightness levels associated with the 16th bit, and half of what remains holds the data associated with the 15th bit. So to scale the 14 bits of data we have so that it uses all 16 bits, we can chop off the part of the range that isn't used by the Nikon — the top three quarters of the histogram. Photoshop's Levels tool only gives us the ability to adjust 256 levels of brightness — 0 is black and 255 is white — even when working with a 16 bit file, but for our purpose this isn't a problem: the 16th bit takes up the upper half of the data, the top 128 levels from 128 to 255, while the 15th bit takes up the range from 64 to 127.<br />
<br />
So for a 14 bit camera, we need to set the white input level to 63:<br />
<br />
<a href="https://www.flickr.com/photos/msabeln/13703813734" title="Corrected histogram and levels of DSC_5492 by Mark Scott Abeln, on Flickr"><img alt="Corrected histogram and levels of DSC_5492" height="593" src="https://farm3.staticflickr.com/2863/13703813734_e955ec8b13_o.png" width="349" /></a><br />
<br />
This use of Levels gives us the same results as if we multiplied all of our image numbers by 4.<br />
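The equivalent multiplication is easy to sketch in Python. A minimal example, assuming NumPy, with a few made-up 14-bit sample values standing in for a real raw file:

```python
import numpy as np

# Simulated 14-bit raw values sitting in a 16-bit container:
raw = np.array([0, 1000, 16383], dtype=np.uint16)

# Multiply by 4 to use the full 16-bit range; this is the same
# operation as setting the white input level to 63 in Levels.
scaled = (raw.astype(np.uint32) * 4).clip(0, 65535).astype(np.uint16)
print(scaled.tolist())  # [0, 4000, 65532]
```

The cast to a wider integer type before multiplying is just defensive, in case any values exceed the expected 14-bit range.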
<br />
Notice how the histogram shows image data going all the way across — although, for other reasons, it often won't quite touch the right hand side. Be aware that linear images tend to have most of their data clustered in the darkest, or left hand, part of the histogram — this is normal. JPEGs delivered by cameras, and images viewed on the Internet, have a gamma correction applied to them, which is a kind of data compression that assigns more color numbers to shadows and mid-tones at the expense of highlights. This works out well in practice, though it adds mathematical complexity, which I hope to cover later; it gives us a nice, usable histogram where most of the values are typically clustered around the middle instead of way down at the left hand edge.<br />
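A tiny numerical sketch of gamma encoding (assuming a plain power-law gamma of 2.2, which is close to, but not exactly, the sRGB curve):

```python
import numpy as np

# Linear brightness values, normalized to 0..1, clustered near black:
linear = np.array([0.001, 0.01, 0.1, 0.5, 1.0])

# Gamma-encode: dark values get a larger share of the output range.
encoded = linear ** (1 / 2.2)
print([round(x, 2) for x in encoded.tolist()])  # [0.04, 0.12, 0.35, 0.73, 1.0]
```

Notice how the value 0.1, a tenth of full brightness in linear terms, lands at roughly a third of the encoded range: that is why gamma-corrected histograms look so much better filled.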
<br />
Many cameras are 12 bit, and so the Levels would have to be set at 15 — which would give us the same results that we would get by multiplying all of the values in the image by 16.<br />
<br />
<a href="https://www.flickr.com/photos/msabeln/13703686015" title="DSC_5492 - undemosaiced, scaled by Mark Scott Abeln, on Flickr"><img alt="DSC_5492 - undemosaiced, scaled" height="329" src="https://farm8.staticflickr.com/7120/13703686015_264d5188a7.jpg" width="500" /></a><br />
<br />
Our image is now brighter and we can actually see the subject tolerably well.<br />
<br />
Now Photoshop is hardly the best software for demosaicing, but it <i>can</i> be done for many cameras. The first thing we need to do is identify the mosaic pattern used by the camera — there is a wide variety of patterns used in the industry, but fortunately a few are commonly used. One major exception is Fuji, which often uses innovative patterns in its cameras. Sigma, whose cameras use Foveon X3 sensors, has no pattern at all, and so this entire discussion of demosaicing is irrelevant there.<br />
<br />
My Nikon camera uses the RGGB pattern, where, starting in the upper-left hand corner of the image, we have a pattern that looks like this:<br />
<br />
<a href="https://www.flickr.com/photos/msabeln/12984532283" title="Array by Mark Scott Abeln, on Flickr"><img alt="Array" height="500" src="https://farm4.staticflickr.com/3482/12984532283_c484b09867.jpg" width="500" /></a><br />
<br />
We have a repeating pattern of 2x2 blocks, with red in the upper left hand corner, and blue in the lower right hand corner.<br />
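In code, the four sub-grids of an RGGB mosaic can be pulled apart with simple strided indexing. A sketch in Python with NumPy, using a tiny made-up array in place of real sensor data:

```python
import numpy as np

# A tiny 4x4 stand-in for an undemosaiced sensor image:
mosaic = np.arange(16, dtype=np.uint16).reshape(4, 4)

# RGGB: each 2x2 block is  R G
#                          G B
red    = mosaic[0::2, 0::2]  # even rows, even columns
green1 = mosaic[0::2, 1::2]  # even rows, odd columns
green2 = mosaic[1::2, 0::2]  # odd rows, even columns
blue   = mosaic[1::2, 1::2]  # odd rows, odd columns
print(red.tolist())   # [[0, 2], [8, 10]]
print(blue.tolist())  # [[5, 7], [13, 15]]
```

The pattern-fill masks described below accomplish the same separation inside Photoshop; the strided slices here are views, so no pixel data is copied.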
<br />
You will have to look up the pattern used in your camera, or simply use my example files linked above. If you have clever computer programming skills, you might even be able to parse <a href="http://www.cybercom.net/~dcoffin/dcraw/dcraw.c">the dcraw.c source code</a> for clues — all of the supported camera patterns are encoded in the file. Be aware, however, that not all mosaic patterns can be decoded by my method, which assumes the RGGB pattern. If your camera's pattern is a rotation of this, you might be able to rotate your image to make it fit — for example, I demosaiced a Panasonic camera raw file, which has a BGGR pattern, simply by rotating the image 180 degrees.<br />
<br />
First duplicate the undemosaiced file in Photoshop, and convert it to the standard <i>sRGB IEC61966-2.1</i> color space. The color space isn't yet important, but it will help you see what is going on in the processing. Now this conversion will mess with the tonality of the image, and so I select each of the three color channels separately, and do an <span style="font-family: Arial, Helvetica, sans-serif;">Image</span>-><span style="font-family: Arial, Helvetica, sans-serif;">Apply Image…</span> command to put the original grayscale values into the new RGB image.<br />
<br />
Then I create a 2x2 pixel image which duplicates the array pattern:<br />
<br />
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi5hpn95hCU549d7u24xBxA8MaMBirDmQ1pcHmLzLFEOtSjw7ouGDsPwwa7aGhjO4L07_zvn8YKvDgzJ98x5HlkBDt755VL-Sp-nVtc0bnJMZ9L3cn_H-j1w4h7UgPYsVu3iC6bUqymGdQ/s1600/rggb+pattern.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi5hpn95hCU549d7u24xBxA8MaMBirDmQ1pcHmLzLFEOtSjw7ouGDsPwwa7aGhjO4L07_zvn8YKvDgzJ98x5HlkBDt755VL-Sp-nVtc0bnJMZ9L3cn_H-j1w4h7UgPYsVu3iC6bUqymGdQ/s1600/rggb+pattern.png" /></a></div><br />
You see that little dot, right? That is the 2x2 image. Here is a bigger version:<br />
<br />
<a href="https://www.flickr.com/photos/msabeln/13709487723" title="rggb pattern - large by Mark Scott Abeln, on Flickr"><img alt="rggb pattern - large" height="200" src="https://farm8.staticflickr.com/7316/13709487723_9c25590355_o.png" width="200" /></a><br />
<br />
Then I select the menu <span style="font-family: Arial, Helvetica, sans-serif;">Edit</span>, then <span style="font-family: Arial, Helvetica, sans-serif;">Define Pattern…</span>, then give it a name.<br />
<br />
On the new image, I create a new layer, go to <span style="font-family: Arial, Helvetica, sans-serif;">Edit</span>, then <span style="font-family: Arial, Helvetica, sans-serif;">Fill…</span> and then <span style="font-family: Arial, Helvetica, sans-serif;">Use: Pattern</span>, and then select my new Custom Pattern. The new layer will be filled with the repeating mosaic pattern. I turn this layer off so we don't have to see it.<br />
<br />
Then I duplicate the image into three layers, which I name red, green, and blue, and put a layer mask on each. <br />
<br />
On the red layer mask, I apply the red channel of the color pattern, using <span style="font-family: Arial, Helvetica, sans-serif;">Image</span>, <span style="font-family: Arial, Helvetica, sans-serif;">Apply Image…</span>:<br />
<br />
<a href="https://www.flickr.com/photos/msabeln/13706211923" title="Apply pattern mask by Mark Scott Abeln, on Flickr"><img alt="Apply pattern mask" height="299" src="https://farm8.staticflickr.com/7224/13706211923_e2f3d7534d_o.png" width="474" /></a><br />
<br />
This gives us only the red bits of the color filter array in this layer. I do the same for the other layers. Then, I double click on the layer name in the Layers tab, and this comes up:<br />
<br />
<a href="https://www.flickr.com/photos/msabeln/13706206215" title="Layer style by Mark Scott Abeln, on Flickr"><img alt="Layer style" height="408" src="https://farm8.staticflickr.com/7295/13706206215_86b938d3b0.jpg" width="500" /></a><br />
<br />
I select only the R channel, unchecking G and B; this turns the layer into the red color, and then I do the same for the blue and green layers, selecting only the channel corresponding to the layer. When I fill the bottom original layer with black, this is the result:<br />
<br />
<a href="https://www.flickr.com/photos/msabeln/13709704415" title="DSC_5492 - color filter array by Mark Scott Abeln, on Flickr"><img alt="DSC_5492 - color filter array" height="329" src="https://farm8.staticflickr.com/7325/13709704415_750177f829.jpg" width="500" /></a><br />
<br />
It appears to be a full-color image, albeit with a very bad white balance; but if we examine a small part of it:<br />
<br />
<a href="https://www.flickr.com/photos/msabeln/13706530733" title="DSC_5492 - color filter array, detail by Mark Scott Abeln, on Flickr"><img alt="DSC_5492 - color filter array, detail" height="750" src="https://farm4.staticflickr.com/3794/13706530733_4f5b258d42_o.png" width="500" /></a><br />
<br />
We can see the color filter array on the image. You can download the full color mosaic image <a href="https://drive.google.com/file/d/0BxoRrNrmaYrIa1ZNYXAzNkpTcGc/edit?usp=sharing">here</a>.<br />
<br />
The demosaicing procedure I will show here is completely <i>ad hoc</i>, but at least it might give Photoshop owners some of the flavor of the process. You might want to review the article <a href="http://therefractedlight.blogspot.com/2014/03/cook-your-own-raw-files-part-2-some.html"><i>Cook Your Own Raw Files, Part 2: Some Notes on the Sensor and Demosaicing</i></a> for an overview of the process.<br />
<br />
The basic problem is this — at any given pixel location, we only have one color, and we have to estimate the other two colors, based on the colors found in surrounding pixels. So for each type of pixel in our color array, we need two functions to get the missing colors, which for our 2x2 matrix gives us 8 functions. For this exercise, I'll use a bilinear function, which is pretty good while still being simple.<br />
<br />
Following is an illustration of the bilinear demosaicing functions, which take averages of the adjacent surrounding pixel values to estimate a full-color value at each pixel.<br />
<br />
<a href="https://www.flickr.com/photos/msabeln/13708091953" title="Bilinear-demosaic-animation by Mark Scott Abeln, on Flickr"><img alt="Bilinear-demosaic-animation" height="198" src="https://farm4.staticflickr.com/3675/13708091953_403c5222af_o.gif" width="198" /></a><br />
<br />
In order to do demosaicing in Photoshop, we can use the obscure <i>Custom filter,</i> found in the menus under <span style="font-family: Arial, Helvetica, sans-serif;">Filter->Other->Custom…</span><span style="font-family: Courier New, Courier, monospace;"> </span><br />
<br />
We will have to use four custom filters for this task, corresponding to the four types of patterns seen in the above animation:<br />
<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj32g8Bn8hWuK4UrIFCmxWlpoAqyIhigIk3KZ-FA7sgpYBE2iAWIG41_qwaMWNjk0Pj7oNA5Mp9GFg1Wmr-EWSj7MO23UajaIe_RdA28AL-qOY2jMQGQeOnzrHqA3CbziAEOvazXGWFy00/s1600/Custom-filters-animation.gif"><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj32g8Bn8hWuK4UrIFCmxWlpoAqyIhigIk3KZ-FA7sgpYBE2iAWIG41_qwaMWNjk0Pj7oNA5Mp9GFg1Wmr-EWSj7MO23UajaIe_RdA28AL-qOY2jMQGQeOnzrHqA3CbziAEOvazXGWFy00/s1600/Custom-filters-animation.gif" height="217" width="500" /></a><br />
<br />
An explanation of the Custom Filter function can be found <a href="http://forensicphotoshop.blogspot.com/2008/01/custom-filters-explained.html">here</a>.<br />
<br />
We will now create a complex layered file, with a layer for each of our eight color estimates. The key to using this — since the Custom Filter changes all pixels — is to use masking to limit our processing to only the red, green, or blue pixels, as appropriate.<br />
<br />
Here are the layers in the file:<br />
<br />
<a href="https://www.flickr.com/photos/msabeln/13710113465" title="Demosaic layers by Mark Scott Abeln, on Flickr"><img alt="Demosaic layers" height="670" src="https://farm8.staticflickr.com/7336/13710113465_0626b24aa9_o.png" width="349" /></a><br />
<br />
Doing this right requires a bit of patience and diligence. I named the layers to help do this more accurately. All of these layers, at first, are simply duplicates of our color mosaic image. The top layer is the color mosaic array, created using custom patterns — we will use this to create our masks.<br />
<br />
The next layer — <span style="font-family: Arial, Helvetica, sans-serif;">Green on Red - 3x3 edges</span> — is processed like this:<br />
<ul><li>We are estimating green color values for our red pixels; using the Layer Style (the box is opened by double-clicking the layer) we restrict this layer to only the green channel — <span style="font-family: Arial, Helvetica, sans-serif;">G</span> is selected in the Layer Style box. </li>
<li>Likewise, for all the rest of the layers, we restrict the color channel to whatever color is being estimated: <span style="font-family: Arial, Helvetica, sans-serif;">B</span> for <span style="font-family: Arial, Helvetica, sans-serif;">Blue on Red</span>, <span style="font-family: Arial, Helvetica, sans-serif;">G</span>, for <span style="font-family: Arial, Helvetica, sans-serif;">Green on Blue</span>, etc.</li>
<li>The masks, however, correspond to the color in the mosaic. For <span style="font-family: Arial, Helvetica, sans-serif;">Green on Red</span>, we are restricting processing to only the red pixels, giving them a green value in addition to red.</li>
<li>The <span style="font-family: Arial, Helvetica, sans-serif;">CFA</span> layer is useful for creating these masks; for example, for the <span style="font-family: Arial, Helvetica, sans-serif;">Green on Red </span>layer, I used <span style="font-family: Arial, Helvetica, sans-serif;">Image</span>-><span style="font-family: Arial, Helvetica, sans-serif;">Apply Image</span><span style="font-family: inherit;">, selecting the Red channel of the </span><span style="font-family: Arial, Helvetica, sans-serif;">CFA</span><span style="font-family: inherit;"> layer, applying it to the layer mask. This gives us a mask where only the red matrix colors are visible.</span></li>
<li><span style="font-family: inherit;">If you zoom way into the image, so far nothing has changed; but the custom filter will alter the image. I used the custom filter indicated by the layer name. For the </span><span style="font-family: Arial, Helvetica, sans-serif;">Green on Red</span><span style="font-family: inherit;"> layer, I used the </span><span style="font-family: Arial, Helvetica, sans-serif;">3x3 edges</span><span style="font-family: inherit;"> filter </span>— which averages the four green pixels found on the edges of the red pixels, and then assigns that average to the green channel of the red pixel. These custom filters can be found <a href="https://drive.google.com/folderview?id=0BxoRrNrmaYrITUR4VDF6YkZ0NWc&usp=sharing">here</a>. </li>
<li>The two green pixels are handled separately. What I do is use the Red or Blue channel from the <span style="font-family: Arial, Helvetica, sans-serif;">CFA</span> layer as a mask, and then shift it by one pixel according to the location, using <span style="font-family: Arial, Helvetica, sans-serif;">Filter</span>-><span style="font-family: Arial, Helvetica, sans-serif;">Other</span>-><span style="font-family: Arial, Helvetica, sans-serif;">Offset</span>; and I set it to <span style="font-family: Arial, Helvetica, sans-serif;">Wrap Around</span>.</li>
</ul>When setting up these layers, you might want to record an action so that you can do this repeatedly with little effort. Also remember that using the bilinear demosaicing algorithm will leave a border around the edges of the image.<br />
<div><br />
Once all the layers are set up, we have a nicely demosaiced image:<br />
<br />
<a href="https://www.flickr.com/photos/msabeln/13711028383" title="DSC_5492 - demosaiced by Mark Scott Abeln, on Flickr"><img alt="DSC_5492 - demosaiced" height="329" src="https://farm8.staticflickr.com/7063/13711028383_99e8d217d8.jpg" width="500" /></a><br />
<br />
You can download the 16-bit file <a href="https://drive.google.com/file/d/0BxoRrNrmaYrINkw2LXE0UFc2VlU/edit?usp=sharing">here</a>. A version of the file, with all of the layers, can be found <a href="https://drive.google.com/file/d/0BxoRrNrmaYrIcjYxZTJhaU9nTzg/edit?usp=sharing">here</a>; beware, however, it is over half a gigabyte in size.<br />
<br />
Looking more closely:<br />
<br />
<a href="https://www.flickr.com/photos/msabeln/13711028953" title="DSC_5492 - demosaiced - detail by Mark Scott Abeln, on Flickr"><img alt="DSC_5492 - demosaiced - detail" height="750" src="https://farm8.staticflickr.com/7365/13711028953_23f21a589c_o.png" width="500" /></a><br />
<br />
It appears that the demosaicing process was pretty clean; there might be some color fringing here, but I think most of it is <a href="http://en.wikipedia.org/wiki/Chromatic_aberration">chromatic aberration</a> from the optics.<br />
<br />
The next step is removing this color cast by doing a white balance. However, just because we are logically doing demosaicing first does not mean that this order is optimal for image quality — perhaps we might want to do white balance before demosaicing.<br />
<br />
Obviously, it is a bit silly to do demosaicing in Photoshop — but it does work in this case — and a general-purpose programming language would be better suited to the task.<br />
<br />
Other articles in the series are:<br />
<br />
<a href="http://therefractedlight.blogspot.com/2014/02/cook-your-own-raw-files-part-1.html">Cook Your Own Raw Files, Part 1: Introduction</a><br />
<a href="http://therefractedlight.blogspot.com/2014/03/cook-your-own-raw-files-part-2-some.html">Cook Your Own Raw Files, Part 2: Some Notes on the Sensor and Demosaicing</a><br />
<br />
</div>Mark S. Abelnhttp://www.blogger.com/profile/06692448528819277158noreply@blogger.com1tag:blogger.com,1999:blog-8768375296475349032.post-74151102277401269772014-03-21T03:43:00.000-05:002016-06-01T20:36:09.357-05:00Why are blue skies noisy in digital photos?<span style="font-size: x-large;">A PHOTOGRAPHER ASKS,</span> “Why are blue skies so noisy in photos?”<br />
<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhHjAH7LNLgUCDHfknarvgrgwUycScGh1vDrQLUWcdQC75rx_4qOqgBhf6rkx8_YGhBgQRAIbW0qT5r1DC10qQXOrl9z3wBoZReXh6beyXrjny0vh4Oh6XB3ajIHnP1Etk-ur5YVZFdg9s/s1600/blue+sky+sample.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhHjAH7LNLgUCDHfknarvgrgwUycScGh1vDrQLUWcdQC75rx_4qOqgBhf6rkx8_YGhBgQRAIbW0qT5r1DC10qQXOrl9z3wBoZReXh6beyXrjny0vh4Oh6XB3ajIHnP1Etk-ur5YVZFdg9s/s1600/blue+sky+sample.jpg"></a><br />
<br />
<i>A noisy blue sky in a photo, greatly enlarged.</i><br />
<br />
This is a common question. Here are the issues, as far as I can tell:<br />
<br />
Skies are blue because of the process of <a href="http://en.wikipedia.org/wiki/Rayleigh_scattering">Rayleigh scattering</a>, where light is scattered by the molecules of air. The higher the frequency of light, the more it is scattered: so when you photograph a blue sky, the camera’s blue color channel will be brighter than the green, and the green will be brighter than the red channel. This also explains the orange color of sunsets — when looking directly at the sun, you are mainly seeing the light which <i>hasn’t</i> been scattered, which is primarily the red along with some green, giving us orange colors. On the other hand, dust and water vapor in the sky will tend to scatter all frequencies of light, desaturating the blue color given us by Rayleigh scattering. I ought to note that overcast or hazy skies do not have a noise problem.<br />
<br />
We tend to notice noise more in uniform regions, such as blue skies. The more uniform a perception is, the more sensitive we are to subtle differences in that perception. The same absolute amount of noise in a complex, heavily textured scene will be less noticeable.<br />
<br />
Granted that there is some noise in the sky already for whatever reason, be aware that using the common <a href="http://en.wikipedia.org/wiki/JPEG">JPEG</a> file format — which is used for most photos on the Internet — can generate additional noise due to its compression artifacts, which appear as blocky 8x8 pixel patterns. Again, these will be more visible in areas of uniform color. The greater the compression, the more visible the blocky patterns. JPEG can also optionally discard more color information, leading to even more noise.<br />
<br />
The color of a blue sky can often be close to or outside of the range or gamut of the standard <a href="http://en.wikipedia.org/wiki/SRGB">sRGB</a> and Adobe RGB color spaces — the result of this is that the red color channel will be quite dark and noisy — unless you <a href="http://therefractedlight.blogspot.com/2010/06/three-opportunities-for-overexposure.html">overexpose</a> the sky, making it a bright, textureless cyan or white. This is most obvious with brilliant, clear, and clean blue skies, such as found in winter, at high latitudes and altitudes, and when using a polarizer. At dusk, the problem is probably worse.<br />
<br />
Depending on the camera and <a href="http://therefractedlight.blogspot.com/2011/01/white-balance-part-1.html">white balance settings</a>, the red color channel may be greatly amplified, which amplifies its noise as well; we already know that the red channel is likely to be noisy, so this just makes things worse. The blue color channel might be amplified too, increasing its noise. Consider also that most cameras have <a href="http://therefractedlight.blogspot.com/2014/03/cook-your-own-raw-files-part-2-some.html">double the number of green-sensitive sensels</a> compared to the red or blue variety, leading to more noise in those two color channels.<br />
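Here is a toy numerical sketch of that amplification. The multiplier of 2 for the red channel is a made-up but plausible daylight white balance figure, and the noise is simulated Gaussian sensor noise:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated dark red channel: a small signal plus Gaussian sensor noise.
red_raw = 100.0 + rng.normal(0.0, 5.0, size=100_000)

# White balance scales the whole channel, noise included:
red_wb = red_raw * 2.0
print(round(red_raw.std(), 1), round(red_wb.std(), 1))  # noise doubles
```

Multiplying a channel multiplies its noise standard deviation by exactly the same factor, which is why a strongly amplified red channel looks so much grainier.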
<br />
Human vision is sensitive to changes in the blue color range. Small changes in the RGB numbers in this range produce a larger change in visual sensation than they do with some other colors. So a relatively small amount of noise will be more visible in the color of a blue sky.<br />
<br />
In order to create a really clean image from a camera’s raw data, high mathematical precision in the calculations is needed, as well as the ability to accept negative or excessive values of color, temporarily, during processing, which is called “<a href="http://www.littlecms.com/CIC18_UnboundedCMM.pdf">unbounded mode</a>” calculations. Now this can make raw conversion quite slow, and so many manufacturers take shortcuts, aiming for images that are “good enough” instead of being precisely accurate. But the result of using imprecise arithmetic is extra noise, along with possibly other digital artifacts.<br />
<br />
So the problem of blue sky noise is a nice mixture of physics, mathematics, human physiology and psychology, technical standards, and camera engineering.Mark S. Abelnhttp://www.blogger.com/profile/06692448528819277158noreply@blogger.com1tag:blogger.com,1999:blog-8768375296475349032.post-45944554295024112542014-03-08T20:46:00.000-06:002014-03-08T20:46:06.820-06:00On the Invention of Photography<span style="font-size: x-large;">IN THE EIGHTEENTH CENTURY,</span> it became fashionable to tour the English countryside, visiting castles, ruined abbeys, and picturesque landscapes. Those who had the ability would often sketch the vistas as a memento.<br />
<br />
<a href="http://en.wikipedia.org/wiki/William_Gilpin_(priest)">William Gilpin</a>, in his <a href="http://books.google.com/books?id=5kwJAAAAQAAJ">essays on the picturesque</a>, defined a <a href="http://en.wikipedia.org/wiki/Picturesque">picturesque</a> landscape as one which was a good subject for a drawing or a painting — and so his observations are relevant to contemporary photographers.<br />
<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEioqqOXgxAvMuErI4IfoQy876sOT1FBIVDSkvWZgATqN6YDDqs4X6GLfFQQMfBc4ol2fdEN00h5Vqj4GJgfWI3B8lvDBhmXR06Z6bNAqtemZ3a7M-x0WDQzNv_m-zYRkTvcD-tSOj7gdKc/s1600/Picturesque+landscape+illustration+from+William+Gilpin's+Three+Essays+on+the+Picturesque.jpg"imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEioqqOXgxAvMuErI4IfoQy876sOT1FBIVDSkvWZgATqN6YDDqs4X6GLfFQQMfBc4ol2fdEN00h5Vqj4GJgfWI3B8lvDBhmXR06Z6bNAqtemZ3a7M-x0WDQzNv_m-zYRkTvcD-tSOj7gdKc/s1600/Picturesque+landscape+illustration+from+William+Gilpin's+Three+Essays+on+the+Picturesque.jpg" height="323" width="500" /></a><br />
<br />
Illustration from Gilpin’s <a href="http://books.google.com/books?id=5kwJAAAAQAAJ"><i>Three Essays: On Picturesque Beauty; On Picturesque Travel; and On Sketching Landscape: to which is Added a Poem, On Landscape Painting</i></a>.<br />
<br />
But not everyone is trained in drawing. As picturesque travel became more popular, new inventions such as the <a href="http://en.wikipedia.org/wiki/Camera_lucida">camera lucida</a> helped novices sketch a landscape with more accuracy, while the <a href="http://en.wikipedia.org/wiki/Claude_Glass">Claude glass</a> darkened and abstracted the scene, giving it a more painterly quality.<br />
<br />
W. Henry Fox Talbot was in Italy on such a picturesque tour, and here he describes his experiences:<br />
<blockquote>One of the first days of the month of October 1833, I was amusing myself on the lovely shores of the Lake of Como, in Italy, taking sketches with Wollaston's Camera Lucida, or rather I should say, attempting to take them: but with the smallest possible amount of success. For when the eye was removed from the prism—in which all looked beautiful—I found that the faithless pencil had only left traces on the paper melancholy to behold.<br />
<br />
After various fruitless attempts, I laid aside the instrument and came to the conclusion, that its use required a previous knowledge of drawing, which unfortunately I did not possess.<br />
<br />
I then thought of trying again a method which I had tried many years before. This method was, to take a <a href="http://en.wikipedia.org/wiki/Camera_obscura">Camera Obscura</a>, and to throw the image of the objects on a piece of transparent tracing paper laid on a pane of glass in the focus of the instrument. On this paper the objects are distinctly seen, and can be traced on it with a pencil with some degree of accuracy, though not without much time and trouble.<br />
<br />
I had tried this simple method during former visits to Italy in 1823 and 1824, but found it in practice somewhat difficult to manage, because the pressure of the hand and pencil upon the paper tends to shake and displace the instrument (insecurely fixed, in all probability, while taking a hasty sketch by a roadside, or out of an inn window); and if the instrument is once deranged, it is most difficult to get it back again, so as to point truly in its former direction.<br />
<br />
Besides which, there is another objection, namely, that it baffles the skill and patience of the amateur to trace all the minute details visible on the paper; so that, in fact, he carries away with him little beyond a mere souvenir of the scene—which, however, certainly has its value when looked back to, in long after years.<br />
<br />
Such, then, was the method which I proposed to try again, and to endeavour, as before, to trace with my pencil the outlines of the scenery depicted on the paper. And this led me to reflect on the inimitable beauty of the pictures of nature's painting which the glass lens of the Camera throws upon the paper in its focus—fairy pictures, creations of a moment, and destined as rapidly to fade away.<br />
<br />
It was during these thoughts that the idea occurred to me…how charming it would be if it were possible to cause these natural images to imprint themselves durably, and remain fixed upon the paper!<br />
<br />
— from <a href="http://www.gutenberg.org/files/33447/33447-h/33447-h.html"><i>The Pencil of Nature</i></a>, by <a href="http://en.wikipedia.org/wiki/Henry_Fox_Talbot">William Henry Fox Talbot</a></blockquote>Talbot is credited as one of the inventors of photography.Mark S. Abelnhttp://www.blogger.com/profile/06692448528819277158noreply@blogger.com0tag:blogger.com,1999:blog-8768375296475349032.post-65219371055010011932014-03-08T19:27:00.003-06:002014-04-12T16:29:46.945-05:00Cook Your Own Raw Files, Part 2: Some Notes on the Sensor and Demosaicing<span style="font-size: x-large;" x-large="">ADMITTEDLY, DIGITAL</span> cameras are somewhat difficult to characterize well, because there is so much variety between models, but there are a few simple measures to help. Likely, most digital camera users are familiar with the term <i>megapixels</i> — perhaps with the vague understanding that more is better, but like many things, there is some sort of trade-off. Unfortunately, a particular megapixel value is usually hard to directly compare to another, simply because there are other factors we have to consider, like the sharpness of lenses used, the size of the sensor itself, any modifications made to the sensor, the processing done on the raw image data by the camera’s embedded computer, and a multitude of other factors. A 12 megapixel camera might very well produce sharper, cleaner images than a 16 megapixel camera.<br />
<blockquote class="tr_bq"><span style="color: #444444;">This is the second article in a series; the first article is here:</span><br />
<a href="http://therefractedlight.blogspot.com/2014/02/cook-your-own-raw-files-part-1.html">Cook Your Own Raw Files, Part 1: Introduction</a></blockquote>It is important to know is that there are a <i>fixed</i> number of light-recording sites on a typical digital camera sensor — millions of them — which is why we talk about megapixels. But consider that for many uses, such large numbers of pixels really aren’t needed:<br />
<br />
<a href="http://www.flickr.com/photos/msabeln/12981872264/" title="Building in Soulard by msabeln, on Flickr"><img alt="Building in Soulard" src="http://farm8.staticflickr.com/7304/12981872264_8e20ebbd73_o.jpg" height="332" width="500" /></a><br />
<br />
<i>A building in the Soulard neighborhood, in Saint Louis, Missouri, USA.</i><br />
<br />
By modern standards, this is a very small image. It is 500 pixels across by 332 pixels in height; and if we multiply them together, 500 x 332 = 166,000 total pixels. If we divide that by one million, we get a tiny 0.166 megapixels, a mere 1% or so of the total number of pixels that might be found in a contemporary camera. Don’t get me wrong — all the extra megapixels in my camera did find a good use: when I made this image by downsampling, some of the impression of sharpness of the original image data found its way into this tiny image. If I only had a 0.166 megapixel camera, the final results would have been softer.<br />
<br />
OK, it is very important to know that a digital image ultimately has a fixed pixel size: if we enlarge it, we don’t get any more real detail out of it, and if we reduce it, we lose detail. Many beginners, when they get Photoshop or some other image editing software, get confused over image sizes and the notorious “pixels per inch” setting, as we see in this Photoshop dialog box:<br />
<br />
<a href="http://www.flickr.com/photos/msabeln/12983178805/" title="Image Size Dialog Box by msabeln, on Flickr"><img alt="Image Size Dialog Box" src="http://farm4.staticflickr.com/3352/12983178805_b44249cc2a.jpg" height="245" width="500" /></a><br />
<br />
If you are displaying an image on the Internet, the “Resolution” or “Pixels/Inch” setting is meaningless, because the image will display on the monitor at whatever resolution the display happens to be set at. Likewise, if you make a print, the Width and Height values you see here are meaningless, for the dimensions of the printed image will be whatever size the image is printed at — and not these values. <i>But</i> — if you multiply the Pixels/Inch value by the Width or Height, you will get the actual pixel dimensions of the image.<br />
<br />
The really important value then is the pixel dimensions: my camera delivers 4928 pixels across and 3264 pixels in the vertical dimension. I can resample the image to have larger pixel dimensions, but all those extra pixels will be padding, or interpolated data, and so I won’t see any new real detail. I can resample the image smaller, but I’ll be throwing away detail.<br />
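The arithmetic can be sketched in a few lines of Python; the 300 pixels-per-inch figure below is just an illustrative assumption, not anything stated by Photoshop or the camera:

```python
# Pixel dimensions are the real, fixed quantity; inches and
# pixels-per-inch are just two ways of dividing the same pixels.
width_px, height_px = 4928, 3264    # my camera's output size

ppi = 300                           # an assumed print resolution
print_width_in = width_px / ppi     # how wide a 300 ppi print would be
print_height_in = height_px / ppi

# Multiplying Photoshop's Width by its Pixels/Inch recovers the
# true pixel dimensions, whatever the "Resolution" box says.
assert round(print_width_in * ppi) == width_px
```

At 300 ppi, the 4928 × 3264 file prints at roughly 16.4 × 10.9 inches; change the ppi and only the print size changes, never the pixel count.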
<br />
<b>Sensels and Pixels</b><br />
<br />
So, we might assume that my camera’s sensor has 4,928 pixels across and 3,264 pixels vertically — well, it actually has more than that, for a reason we’ll get to later. But in another sense, we can say that the camera has <i>fewer</i> pixels than that, if we define a pixel as having full color information. My camera then does not capture whole pixels, but only a series of partial pixels.<br />
<br />
It would be more correct to say that a 16 megapixel camera actually has 16 million <i>sensels</i>, where each sensel captures only a narrow range of light frequencies. In most digital cameras, we have three kinds of sensels, each of which only captures one particular color.<br />
<br />
The image sensor of my particular model of Nikon has a vast array of sensels — a quarter of which are only sensitive to red light, another quarter sensitive to blue light, and half sensitive to green light, arrayed in a pattern somewhat like this one:<br />
<br />
<a href="http://www.flickr.com/photos/msabeln/12984532283/" title="Array by msabeln, on Flickr"><img alt="Array" src="http://farm4.staticflickr.com/3482/12984532283_2f258239c9_o.png" height="500" width="500" /></a><br />
<br />
Since I was unable to find a good photomicrograph of an actual camera sensor, this will have to do for our needs — but this abstract representation doesn’t really show us the complexity of sensors.<br />
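The layout itself is easy to describe in code. This is a generic RGGB arrangement, assumed here only for illustration — actual cameras vary in both arrangement and filter formulation:

```python
def bayer_color(row, col):
    """Color captured by the sensel at (row, col) in an RGGB Bayer array."""
    if row % 2 == 0:
        return 'R' if col % 2 == 0 else 'G'
    return 'G' if col % 2 == 0 else 'B'

# Print a small patch of the pattern: half the sites are green,
# and a quarter each are red and blue.
for r in range(4):
    print(' '.join(bayer_color(r, c) for c in range(4)))
```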
<br />
Not only is the sensor most sensitive to a bright yellowish-green light (as far as I can tell), it is less sensitive to both blue and red: this is on top of the fact that we have as many green sensels as we have red and blue combined. Please be aware that the camera can only deliver these colors (in a variety of brightnesses), and so somehow we are going to have to find some means of combining these colors together in order to get a full gamut of color in our final image.<br />
<br />
We have information on only one color at each sensel location. If we want to deliver a full pixel of color at each location, we need to use an interpolation algorithm — a method of estimating the full color at any particular sensel location based on the surrounding sensels. This process of estimating colors is called demosaicing, interpolation, or debayering — converting the mosaic of colors to a continuous image.<br />
<br />
Camera raw files are of great interest because they record image data that have not yet been interpolated; you can, after the fact on your computer, use updated software that might include a better interpolation algorithm, and so you can possibly get better images today from the same raw file than what you could have gotten years ago.<br />
<br />
Please be aware that the kind of pattern above — called a <a href="http://en.wikipedia.org/wiki/Bayer_filter">Bayer filter</a>, after the Eastman Kodak inventor Bryce Bayer (1929–2012) — is not the only one. Fujifilm has experimented with a variety of innovative patterns for its cameras, while the Leica Monochrom has no color pattern at all, since it shoots in black and white only. Also be aware that the colors that any given model of camera captures might be somewhat different from what I show here — even cameras that use the same basic sensor might use a different color filter array and so have different color rendering. Camera manufacturers will use different formulations of a color filter array to adjust color sensitivity, or to allow for good performance in low light.<br />
<br />
Sigma cameras with <a href="http://en.wikipedia.org/wiki/Foveon_X3_sensor">Foveon X3</a> sensors have no pattern, since they stack three different colors of sensels on top of each other, giving full-color information at each pixel location. While this may seem to be ideal, be aware that this design has its own problems.<br />
<br />
<b>A Clarification</b><br />
<br />
I am using the term ‘color’ loosely here. Be aware that many combinations of light frequencies can produce what looks like — to the human eye — the same color. For example, a laser might produce a pure yellow light, but that color of yellow might look the same as a combination of red and green light. This is called <a href="http://en.wikipedia.org/wiki/Metamerism_(color)">metamerism</a>, and is a rather difficult problem. There are any number of formulations of color filters that can be used in a digital camera, and we can expect them to have varying absorbance of light — leading to different color renderings. For this reason, it is difficult to get truly accurate colors from a digital camera.<br />
<br />
<b>A Perfect Sample Image</b><br />
<br />
For my demonstrations of demosaicing, I’ll be using sections of this image (Copyright © 2001 by Bruce Justin Lindbloom; source <a href="http://www.brucelindbloom.com/">http://www.brucelindbloom.com</a>):<br />
<br />
<a href="http://www.flickr.com/photos/msabeln/12985355414/" title="DeltaE_8bit_gamma2.2 by msabeln, on Flickr"><img alt="DeltaE_8bit_gamma2.2" src="http://farm3.staticflickr.com/2638/12985355414_0eac69049a.jpg" height="333" width="500" /></a><br />
<br />
Mr. Lindbloom writes:<br />
<blockquote>In the interest of digital imaging research, I am providing a set of four images that represent "perfect" images, that is, they represent a natural scene (as opposed to say, a test pattern or a gradient) which is completely void of any noise, aliasing or other image artifacts. They were taken with a virtual, six mega-pixel camera using a ray tracing program I wrote myself. The intensity of each pixel was computed in double precision floating point and then companded and quantized to 8- or 16-bits per channel at the last possible moment before writing the image file. The four variations represent all combinations of 8- or 16-bits per channel and gamma of 1.0 or 2.2. I believe these images will be useful for research purposes in answering such questions as "How many bits are needed to avoid visual defects?" and "How does one determine the number of bits of real image information, as opposed to digitized noise?" In this sense, they may provide ideal image references against which actual digitized images may be compared by various visual or statistical analysis techniques.</blockquote>No camera — at the same resolution — will produce images as sharp as this, nor will any camera produce colors as accurate as this image, and no camera image will be without noise. Using perfect images for our purposes has the benefit that it will make defects more visible; real cameras will have less pristine images.<br />
<br />
I am using two small crops of this image, 100x150 pixels in size each, which I show here enlarged 500%:<br />
<br />
<a href="http://www.flickr.com/photos/msabeln/12985568364/" title="Sample 1 by msabeln, on Flickr"><img alt="Sample 1" src="http://farm4.staticflickr.com/3715/12985568364_d7b81b807b_o.png" height="750" width="500" /></a><br />
<br />
<a href="http://www.flickr.com/photos/msabeln/12985153215/" title="Sample 2 by msabeln, on Flickr"><img alt="Sample 2" src="http://farm8.staticflickr.com/7336/12985153215_278f0dc513_o.png" height="750" width="500" /></a><br />
<br />
<b>Applying the Mosaic</b><br />
<br />
The sample image has brilliant, saturated colors of impossible clarity. But what happens if we pretend that this image was taken with a camera with a Bayer filter? Here I combine the image crop with a red-green-blue array similar to the one shown above:<br />
<br />
<a href="http://www.flickr.com/photos/msabeln/12985751554/" title="Sample 1 mosaic by msabeln, on Flickr"><img alt="Sample 1 mosaic" src="http://farm8.staticflickr.com/7320/12985751554_acd0ba462a_o.png" height="750" width="500" /></a><br />
<br />
It looks rather bad, and recovering something close to the original colors seems to be hopeless. What is worse, we apparently have lost all of the subtle color variations seen in the original image. If we take a closer look at this image:<br />
<br />
<a href="http://www.flickr.com/photos/msabeln/12985862584/" title="Sample 1 mosaic magnified by msabeln, on Flickr"><img alt="Sample 1 mosaic magnified" src="http://farm4.staticflickr.com/3768/12985862584_1844cf7118_o.png" height="740" width="500" /></a><br />
<br />
We see that we have only three colors [technically the red, green, and blue primary colors of the <a href="http://en.wikipedia.org/wiki/SRGB">sRGB</a> standard], varying only in brightness. We seem to have lost much of the color of our original image.<br />
<br />
But this is precisely what happens with a digital camera — all the richness and variety of all the colors of the world get reduced down to only three colors. However, all is not lost — if we intelligently select three distinct primary colors, we can reconstruct <i>all</i> of the colors that lie between them by specifying varying quantities of each primary. <i>This is the foundation of the RGB color system. </i>Please take a look at these articles:<br />
<ul><li><a href="http://therefractedlight.blogspot.com/2010/08/color-spaces-part-1-rgb.html">Color Spaces, Part 1: RGB</a></li>
<li><a href="http://therefractedlight.blogspot.com/2010/08/rgb-quiz.html">An RGB Quiz</a></li>
</ul><b>Removing the Matrix</b><br />
<br />
Now we have lost color information because of the Bayer filter, and for most digital cameras we simply have to accept that fact and do the best we can to estimate the color information that has been lost. Since each pixel — or rather, sensel — delivers only one color, and we need three for full color, we have to estimate the missing colors by looking at the neighboring sensels and making some assumptions about the image. Red sensels need green and blue color data, green sensels lack red and blue data, and blue sensels need red and green.<br />
<br />
A very simple method for doing this is the Nearest Neighbor interpolation algorithm, where we grab adjacent sensel colors and use those to estimate the full color of the image. Here is an illustration of a nearest neighbor algorithm:<br />
<br />
<a href="http://www.flickr.com/photos/msabeln/12994324844/" title="Example of Nearest Neighbor by msabeln, on Flickr"><img alt="Example of Nearest Neighbor" src="http://farm8.staticflickr.com/7306/12994324844_3a4acbc4c4_o.png" height="725" width="466" /></a><br />
Take a look at the three original color channels — in the red, we only have data from the red sensel, and the rest are black — and so we copy that red value to the adjoining sensels. Since we have two different green sensels, here we split the difference between them and copy the resulting average to the red and blue sensels. We end up with a variety of colors when we are finished. Now there are a number of ways we can implement a nearest neighbor algorithm, and these depend on the arrangement of the colors on the sensor, and each one will produce somewhat different results.<br />
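A minimal sketch of this idea, assuming the RGGB layout used in these illustrations: each 2×2 block yields one full color (averaging the two greens), which is then copied to all four pixel positions. Real implementations differ in detail; this only shows the principle.

```python
def nearest_neighbor_demosaic(mosaic):
    """mosaic: 2D list of raw sensel values in RGGB layout.
    Returns a same-sized 2D list of (r, g, b) pixels."""
    h, w = len(mosaic), len(mosaic[0])
    out = [[None] * w for _ in range(h)]
    for r in range(0, h, 2):
        for c in range(0, w, 2):
            red = mosaic[r][c]
            green = (mosaic[r][c + 1] + mosaic[r + 1][c]) / 2  # two greens averaged
            blue = mosaic[r + 1][c + 1]
            for dr in (0, 1):          # copy one color to the whole 2x2 block
                for dc in (0, 1):
                    out[r + dr][c + dc] = (red, green, blue)
    return out
```

A patch of uniform color survives intact; the artifacts appear only where the sensels in a block straddle an edge and so record genuinely different light.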
<br />
Here we apply a nearest neighbor algorithm to our sample images:<br />
<br />
<a href="http://www.flickr.com/photos/msabeln/12994579844/" title="Sample 1 - Nearest Neighbor by msabeln, on Flickr"><img alt="Sample 1 - Nearest Neighbor" src="http://farm3.staticflickr.com/2444/12994579844_68ec59ce2f_o.png" height="750" width="500" /></a><br />
<br />
<a href="http://www.flickr.com/photos/msabeln/12994579904/" title="Sample 2 - Nearest Neighbor by msabeln, on Flickr"><img alt="Sample 2 - Nearest Neighbor" src="http://farm3.staticflickr.com/2132/12994579904_e0ed8e6660_o.png" height="750" width="500" /></a><br />
<br />
OK, it is apparent that we can reproduce areas of uniform color well, giving us back the colors of the original image. However, edges are a mess. Since the algorithm used has no idea that the lines on the second image are supposed to be gray, it gives us color artifacts. Generally, all edges are rough. Also notice that there is a bias towards one color on one side of an object, and a bias towards another color on the other side of the same object — in the samples, the orange patches have green artifacts on their top and left edges, and red artifacts on their bottom and right edges. This bias makes sense, since we are only copying color data from one side of each sensel.<br />
<br />
We can eliminate this bias if we replace the Nearest Neighbor algorithm with something that is symmetric. A Bilinear algorithm will examine the adjacent colors on all sides of each sensel, getting rid of the bias. Our sample images here are demosaiced with a bilinear algorithm:<br />
<br />
<a href="http://www.flickr.com/photos/msabeln/12995098634/" title="Sample 1 - Bilinear by msabeln, on Flickr"><img alt="Sample 1 - Bilinear" src="http://farm3.staticflickr.com/2695/12995098634_54821f2523_o.png" height="750" width="500" /></a><br />
<br />
<a href="http://www.flickr.com/photos/msabeln/12994832303/" title="Sample 2 - Bilinear by msabeln, on Flickr"><img alt="Sample 2 - Bilinear" src="http://farm4.staticflickr.com/3504/12994832303_cf648a9eb1_o.png" height="750" width="500" /></a><br />
<br />
OK, a bilinear algorithm eliminates the directional bias of color artifacts, which is good. While the edges are still rough, they do seem a bit softer — which makes sense, since we are taking data from a wider range of sensels, which in effect blurs the image a bit.<br />
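A sketch of the bilinear idea, again assuming an RGGB layout: each missing channel at a sensel is estimated as the average of the same-color sensels in the surrounding 3×3 neighborhood. Border sensels simply average whatever same-color neighbors exist, which is one reason the edges of a demosaiced frame are suspect.

```python
def rggb(r, c):
    """Color of the sensel at (r, c) in an RGGB Bayer layout."""
    if r % 2 == 0:
        return 'R' if c % 2 == 0 else 'G'
    return 'G' if c % 2 == 0 else 'B'

def bilinear_demosaic(mosaic):
    """Estimate full (r, g, b) at every sensel by averaging the
    same-color sensels in each 3x3 neighborhood."""
    h, w = len(mosaic), len(mosaic[0])
    out = []
    for r in range(h):
        row = []
        for c in range(w):
            px = {}
            for ch in 'RGB':
                if rggb(r, c) == ch:
                    px[ch] = mosaic[r][c]   # this channel was measured here
                else:                       # average same-color 3x3 neighbors
                    vals = [mosaic[rr][cc]
                            for rr in range(max(r - 1, 0), min(r + 2, h))
                            for cc in range(max(c - 1, 0), min(c + 2, w))
                            if rggb(rr, cc) == ch]
                    px[ch] = sum(vals) / len(vals)
            row.append((px['R'], px['G'], px['B']))
        out.append(row)
    return out
```

Because every estimate now draws symmetrically from all sides, the one-sided color bias of the nearest neighbor method disappears, at the cost of a slight blur.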
<br />
Demosaicing algorithms all assume that colors of adjacent pixels are going to be more similar to each other than they are different — and if we cannot make this assumption, then we can’t do demosaicing. Nearest neighbor algorithms assume that all colors in a 2x2 block are basically identical, while the bilinear algorithm assumes that colors change uniformly in a linear fashion. If we sample sensels farther away, we can assume more complicated relationships, such as a cubic polynomial, and this assumption is built into the bicubic algorithm, which produces smoother results than those illustrated.<br />
<br />
More complex algorithms will give us better gradations and smoothness in color, but have the side-effect of softening edges, and so there is research in ways of discovering edges to handle them separately, by forcing colors along an edge to one value or another. Some algorithms are optimized for scanning texts, while others are better for natural scenes taken with a camera. Be aware that noise will change the results also, and so there are some algorithms that are more resistant to noise, but may not produce sharp results with clean images.<br />
<br />
As high frame rates are often desired in cameras, complex algorithms for producing in-camera JPEGs may be impractical, simply because they take much longer to process each image — however, this is less of a problem with raw converters on computers, since we can assume that a slight or even long delay is more acceptable.<br />
<br />
Notice that our bilinear images have a border around them. Because the bilinear algorithm takes data from all around each sensel, we don’t have complete data for the sensels on the edges, and so there will be a discontinuity on the border of the image. Because of this, cameras may have slightly more sensels than what is actually delivered in a final JPEG — edges are cropped.<br />
<br />
We ought not assume that a camera with X megapixels always needs to deliver an image at that size: perhaps it makes sense to deliver a smaller image? For example, we can collapse each 2x2 square of sensels into one pixel, producing an image with half the resolution in each dimension and one quarter the pixel count, possibly with fewer color artifacts. Some newer smartphone cameras routinely use this kind of processing to produce superior images from small, noisy high-megapixel sensors. This is a field of active research.<br />
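That 2×2 collapse can be sketched directly (again assuming an RGGB layout; some raw converters call this binning or half-size output — the name varies by software):

```python
def halfsize_demosaic(mosaic):
    """Collapse each RGGB 2x2 sensel block into a single RGB pixel:
    no interpolation, no borrowed neighbors, so no color fringing --
    at the cost of half the resolution in each dimension."""
    return [[(mosaic[r][c],                               # red
              (mosaic[r][c + 1] + mosaic[r + 1][c]) / 2,  # mean of two greens
              mosaic[r + 1][c + 1])                       # blue
             for c in range(0, len(mosaic[0]), 2)]
            for r in range(0, len(mosaic), 2)]

small = halfsize_demosaic([[50] * 4 for _ in range(4)])  # 4x4 sensels -> 2x2 pixels
```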
<br />
<b>Antialias</b><br />
<br />
<i>Don’t pixel peep! Don’t zoom way into your images to see defects in your lens, focus, or demosaicing algorithms! Be satisfied with your final images on the computer screen, and if you make a large print, don’t stand up close to it looking for defects. Stand back at a comfortable distance, and enjoy your image.</i><br />
<br />
Perhaps this is wise advice. Don’t agonize over tiny details which will never be seen by anyone who isn’t an obsessive photographer. This is especially true when we routinely have cameras with huge megapixel values — and never use all those pixels in ordinary work.<br />
<br />
But impressive specifications can help you even if you never use technology to its fullest. If you want a car with good acceleration at legal highway speeds, you will need a car that can go much faster — even if you never drive over the speed limit. If you don’t want a bridge to collapse, build it to accept far more weight than it will ever experience. Most lenses are much sharper when stopped down by one or two stops: and so, if you want a good sharp lens at f/2.8, you will likely need to use an f/1.4 lens. A camera that is tolerable at ISO 6400 will likely be excellent at ISO 1600. If you want exceptionally sharp and clean 10 megapixel images, you might need a camera that has 24 megapixels.<br />
<br />
OK, so then let’s consider the rated megapixel count of a camera as a safety factor, or as an engineering design feature intended to deliver excellent results at a lower final resolution. Under typical operating conditions, you won’t use all of your megapixels. As we can probably assume, better demosaicing algorithms might produce softer, but cleaner results, and that is perfectly acceptable. Now perhaps those color fringes around edges are rarely visible — although resampling algorithms might show them in places, and so getting rid of them ought to be at least somewhat important.<br />
<br />
My attempts to blur the color defects around edges were not fruitful. In Photoshop, I tried a half dozen methods of chroma blur and noise reduction, and either they didn’t work, or they produced artifacts that were more objectionable than the original flaws. The problem is, once the camera captures the image,<i> the damage is already done</i>: the camera does in fact capture different colors on opposite sides of a sharp edge — a red sensel might be on one side of an edge, while a blue sensel captures light from the other side.<br />
<br />
In order to overcome unpleasant digital artifacts, most cameras incorporate antialias filters: these blur the image slightly, splitting incoming light so that it falls on more than one sensel. This process <i>can only be done at the time of image capture; it cannot be duplicated in software after the fact.</i> Since I am working with a perfect synthetic image, I can simulate the effect of an antialias filter before I impose a mosaic.<br />
<br />
Here I applied a box blur, at 75% opacity, to the original image data before applying the mosaic; demosaicing then leads to a rather clean-looking image:<br />
<br />
<br />
<a href="http://www.flickr.com/photos/msabeln/12998759955/" title="Sample 1 - bilinear - antialias by msabeln, on Flickr"><img alt="Sample 1 - bilinear - antialias" src="http://farm8.staticflickr.com/7331/12998759955_b9be9cc5b3_o.png" height="750" width="500" /></a><br />
<br />
<a href="http://www.flickr.com/photos/msabeln/12998759725/" title="Sample 2 - bilinear - antialias by msabeln, on Flickr"><img alt="Sample 2 - bilinear - antialias" src="http://farm4.staticflickr.com/3265/12998759725_cdfbf1bcde_o.png" height="750" width="500" /></a><br />
<br />
Now I could have used stronger blurring to get rid of more color, but I think this is perfectly adequate. What little color noise still remains can easily be cleaned up with a very slight amount of chroma noise reduction — but it is hardly needed. Had I used stronger blurring, the residual noise would have been nearly non-existent, but the image would also have been softer. Note that we still have the border problem, easily fixed by cropping out those pixels.<br />
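The pre-blur step can be sketched as a per-channel box blur blended back with the original at 75% opacity — a rough, hypothetical stand-in for a real optical antialias filter, which acts on the light itself rather than on pixel data:

```python
def box_blur_blend(channel, opacity=0.75):
    """3x3 box blur of a 2D list of values, blended with the
    original at the given opacity."""
    h, w = len(channel), len(channel[0])
    out = []
    for r in range(h):
        row = []
        for c in range(w):
            vals = [channel[rr][cc]
                    for rr in range(max(r - 1, 0), min(r + 2, h))
                    for cc in range(max(c - 1, 0), min(c + 2, w))]
            blurred = sum(vals) / len(vals)
            row.append(opacity * blurred + (1 - opacity) * channel[r][c])
        out.append(row)
    return out

# A hard edge gets softened, spreading each value onto its neighbors:
soft = box_blur_blend([[0, 0, 100, 100]])
```

After such a blur, a red sensel and a blue sensel on opposite sides of the edge no longer record wildly different values, so demosaicing produces far less color fringing.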
<br />
It is typically said that the purpose of the <a href="http://en.wikipedia.org/wiki/Anti-aliasing_filter">antialias filter</a> is to avoid moiré patterns, as illustrated in the article <a href="http://therefractedlight.blogspot.com/2010/12/problem-of-resizing-images.html">The Problem of Resizing Images</a>. Understand that moiré patterns are a type of <a href="http://en.wikipedia.org/wiki/Aliasing">aliasing</a> — in any kind of digital sampling of an analog signal, such as sound or light, if the original analog signal isn’t processed well, the resulting digital recording or image may have an unnatural and unpleasant roughness when heard or seen. From my experience, I think that avoiding the generation of digital color noise defects due to the Bayer filter is a stronger or more common reason for using an antialias filter. But of course, both moiré and demosaicing noise are examples of aliasing in general, so solving one problem solves the other.<br />
<br />
Producing a good digital representation of an analog signal requires two steps: blurring the original <i>analog</i> signal a bit, followed by <a href="http://en.wikipedia.org/wiki/Oversampling">oversampling</a> — collecting more detailed data than you might think you actually need. In digital audio processing, the original analog sound is sent through a low-pass filter, eliminating ultrasonic frequencies, and the resulting signal is sampled at a frequency more than double the highest frequency that can be heard by young, healthy ears — this is not wasteful, but rather essential for high-quality digital sound reproduction. Likewise, high megapixel cameras — often considered wasteful or overkill — ought to be seen as digital devices which properly implement oversampling so as to get a clean final image.<br />
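The audio half of the analogy is easy to verify numerically: a tone above half the sampling rate produces samples identical to a lower-frequency “alias”. The 7 kHz and 10 kHz figures here are arbitrary, chosen only for illustration:

```python
import math

fs = 10_000            # sampling rate, Hz -- deliberately too low
f_high = 7_000         # above the Nyquist limit of fs / 2 = 5 kHz
f_alias = fs - f_high  # 3 kHz: where the tone falsely reappears

for n in range(20):
    t = n / fs
    s = math.sin(2 * math.pi * f_high * t)
    # Identical to a phase-flipped 3 kHz tone, sample for sample:
    assert abs(s + math.sin(2 * math.pi * f_alias * t)) < 1e-9
```

Once sampled, nothing distinguishes the two tones — which is why the low-pass blurring must happen before sampling, not after.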
<br />
<b>Anti-antialias</b><br />
<b><br />
</b> There has been a trend in recent years of DSLR camera makers producing sensors without antialias filters. A common opinion is that this makes for sharper-looking images. Indeed they are sharper looking — compare our images demosaiced with the nearest neighbor algorithm to the blurred ones at the bottom. But isn’t that apparent sharpness due to aliasing — a defect of processing — and not real detail at all?<br />
<br />
So in some respects, getting rid of an antialias filter ought to be considered a mistake. Perhaps Nikon realizes this, offering both the D800, which incorporates an antialias filter, and the D800E, which incorporates another filter that cancels out the effect of the antialias filter. But be aware that there is no way to <i>digitally</i> correct for the loss of the antialias blurring of the original analog signal. Any attempt, after the fact, to correct for digital aliasing flaws will inevitably be worse than if the analog signal had been blurred to begin with.<br />
<br />
However, practically speaking, it is extremely difficult to get a pixel-level sharp image from a high megapixel camera, especially with a small image format such as that of a DSLR. Optics usually aren’t all that good, and excellent optics are rare. Also, camera shake, poor focus, narrow depth of field, and diffraction will all blur the light hitting the sensor, giving us the analog pre-blur that is needed to produce clean digital images. In this case, elimination of the antialias filter can actually provide truly sharper images — since we don’t want to further blur light that is already blurry enough to avoid artifacts.<br />
<br />
Be aware that the construction of an antialias filter is rather problematic, and such filters are not necessarily mathematically perfect for the purpose of pre-blurring the analog light signal for the Bayer filter. We find a wide variation of antialias filters among camera models, with some having the reputation of being stronger than needed.<br />
<br />
<b>Some Further Notes</b><br />
<b><br />
</b> A quick Web search for “demosaicing algorithms” will bring up many scholarly articles regarding the strengths and weaknesses of the various methods of interpolation, but these are rarely of interest to a photographer.<br />
<br />
What a photographer needs to know is that the process <i>does exist</i>, and that most of the time, the method used isn’t too particularly important. The results and artifacts produced by demosaicing only become critical to image quality when the photographer is trying to do something extreme — like making tiny crops of an image or producing huge images, where exceptional optics are used with refined technique: where there is a real need to produce the cleanest possible images. This might also be useful if you are using a low-resolution camera with sharp optics. Otherwise, for casual photography, the details of demosaicing are of little value. Sometimes we simply need to retouch out obvious defects, like blurring the color along particularly noxious edges.<br />
<br />
Most raw converter software does not give us any choice in demosaicing algorithm. But some that do include <a href="http://www.raw-photo-processor.com/">Raw Photo Processor</a>, <a href="http://www.cybercom.net/~dcoffin/dcraw">dcraw</a>, <a href="http://www.darktable.org/">darktable</a>, and <a href="http://rawtherapee.com/">RawTherapee</a>. The latter is interesting because you can quickly see the results of changing the algorithm used.<br />
<br />
Logically, I am presenting demosaicing as the first step in processing a raw file, but this is not necessarily the best thing to do — I am simply describing it now to get it out of the way. The <a href="http://www.guillermoluijk.com/tutorial/dcraw/index_en.htm">Dcraw tutorial</a> demonstrates that white balance ought to be done <i>before</i> demosaicing, otherwise additional color artifacts might be generated.<br />
<br />
Click here for the previous article in the series:<br />
<br />
<a href="http://therefractedlight.blogspot.com/2014/02/cook-your-own-raw-files-part-1.html">Cook Your Own Raw Files, Part 1: Introduction</a><br />
<br />
And the next article:<br />
<br />
<a href="http://therefractedlight.blogspot.com/2014/04/cook-your-own-raw-files-part-3-demosaic.html">Cook Your Own Raw Files, Part 3: Demosaic Your Images</a>Mark S. Abelnhttp://www.blogger.com/profile/06692448528819277158noreply@blogger.com0tag:blogger.com,1999:blog-8768375296475349032.post-43227509652517203142014-02-07T13:55:00.001-06:002014-04-06T00:09:37.157-05:00Cook Your Own Raw Files, Part 1: Introduction<blockquote></blockquote><span style="font-size: x-large;">“IT MUST BE </span>really simple. When a digital camera’s shutter button is pressed, the camera records the amount of light falling on it, which is written to an image file, which we then can display on a computer monitor or send to a printer.”<br />
<br />
Well, no, it isn't quite that simple — in fact, it is quite complex, for reasons including:<br />
<ul><li>The physics of light</li>
<li>The physiology of human vision</li>
<li>The psychology of color</li>
<li>How cameras sense light in general</li>
<li>How cameras sense color difference</li>
<li>Technical trade-offs in camera design</li>
<li>The efficiency of image data storage and computer processing of images</li>
<li>The industry standards that guide camera manufacturers</li>
</ul>For example, this nice image of a butterfly is typical of what we would expect from a digital camera:<br />
<br />
<a href="http://www.flickr.com/photos/msabeln/11949624346/" title="DSC_4160 by msabeln, on Flickr"><img alt="DSC_4160" src="http://farm8.staticflickr.com/7361/11949624346_d9223873ca.jpg" height="331" width="500" /></a><br />
<br />
The colors look plausible, we can see lots of highlight and shadow detail, and it is fairly crisp. But this is not how the camera records the image. What the camera records is rather something more like this:<br />
<br />
<a href="http://www.flickr.com/photos/msabeln/11949624896/" title="DSC_4160 - raw by msabeln, on Flickr"><img alt="DSC_4160 - raw" src="http://farm3.staticflickr.com/2885/11949624896_f0214582b7.jpg" height="332" width="500" /></a><br />
<br />
This is dark and green. Not at all what we would expect, based on the naïve idea which opened this article. If we were to open this image into Photoshop, adjust the brightness, and subtract out the green color, we still would not have good colors — they would rather be somewhat subdued and have the wrong hue — and the image would still appear to be a bit lackluster and soft. There would also be severe banding or <a href="http://en.wikipedia.org/wiki/Posterization">posterization</a> in the shadows — but that is a matter for later. However, an image file such as this will give you a good idea of the performance, characteristics, and the limits of your camera.<br />
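Why the raw rendering is dark and green can be sketched numerically. Raw values are linear and unbalanced: the green sensels respond most strongly, and displays expect gamma-encoded data. The sensel readings, multipliers, and gamma below are illustrative assumptions, not my camera’s actual calibration:

```python
# Hypothetical linear raw readings for a neutral gray patch
# (0..1 scale): green dominates because the sensor is most
# sensitive to green light.
raw_r, raw_g, raw_b = 0.10, 0.18, 0.09

# Step 1: white balance -- scale red and blue up to match green.
wb_r, wb_b = raw_g / raw_r, raw_g / raw_b
r, g, b = raw_r * wb_r, raw_g, raw_b * wb_b  # now a neutral gray

# Step 2: gamma-encode -- linear values look far too dark on a
# display, so they are raised to 1/2.2 before display or JPEG.
encode = lambda v: v ** (1 / 2.2)
r, g, b = encode(r), encode(g), encode(b)    # each rises to about 0.46
```

Skip either step and you get something very like the dark, green image shown above.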
<br />
The purpose of this proposed series of articles is to cover the basic steps that go on within cameras and raw processing software, and to do so, we are going to use some software that will emulate, at least partly, what happens in raw conversion, and I hope will explain it in an easy-to-understand fashion.<br />
<br />
Now this is entirely for educational purposes — what I present here is no substitute for actually using a raw processor — but I hope that my readers will find this interesting to know, and maybe learn a few things which might make them better photographers and retouchers.<br />
<br />
What we need for our experimentation is some computer software that will deliver the closest possible approximation of a camera’s raw data, which we can pull into Photoshop or some other image editing program for further investigation. I use <a href="http://www.raw-photo-processor.com/">Raw Photo Processor</a> — a free download — but this, unfortunately, is available only for Macintosh computers; the <span style="font-family: Courier New, Courier, monospace;"><a href="http://www.cybercom.net/~dcoffin/dcraw">dcraw</a></span> program can be used with most any computer.<br />
<br />
If you have a Mac, download RPP from <a href="http://www.raw-photo-processor.com/">http://www.raw-photo-processor.com</a>. There are a few settings in Raw Photo Processor needed to produce a raw-like image:<br />
<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhCiY7xUCxpdUdbeRRLtVRU-2JLyIOe09yl0lq8nU3IgyQlc77Bzw0WMjIBTRtHJ-VJbLDeWWB5TVVhTbf-EwLAVDbA9szpYFBhDAeKdqHCNXXFHGNDwQanMWRVB7U27xeM4iyaZ7ukKKw/s1600/RPP+settings.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhCiY7xUCxpdUdbeRRLtVRU-2JLyIOe09yl0lq8nU3IgyQlc77Bzw0WMjIBTRtHJ-VJbLDeWWB5TVVhTbf-EwLAVDbA9szpYFBhDAeKdqHCNXXFHGNDwQanMWRVB7U27xeM4iyaZ7ukKKw/s1600/RPP+settings.png" /></a><br />
<br />
If your image editor allows a full 32-bit workflow, then you can set RPP to use <i>Raw RGB TIFF 32-bit</i>, which will give you better accuracy and less clipping. Photoshop has limited 32-bit functionality, but ought to be able to do most of the functions we plan to demonstrate.<br />
<br />
An alternative is <span style="font-family: Courier New, Courier, monospace;"><a href="http://www.cybercom.net/~dcoffin/dcraw">dcraw</a></span>, a simple command-line program by Dave Coffin, whose stated mission is to <i>“Write and maintain an ANSI C program that decodes any raw image from any digital camera on any computer running any operating system”</i>; this software is used in many raw converter packages. You can download it from <a href="http://www.cybercom.net/~dcoffin/dcraw">http://www.cybercom.net/~dcoffin/dcraw</a>, and a good tutorial for using it is found <a href="http://www.guillermoluijk.com/tutorial/dcraw/index_en.htm">here</a>.<br />
<br />
This program operates from the <a href="http://en.wikipedia.org/wiki/Command-line_interface">command line</a>, rather than through the more common graphical interface. The command to produce a lightly processed, raw-like image file is this:<br />
<br />
<span style="font-family: Courier New, Courier, monospace;">dcraw -v -T -4 -o 0 198_0011.nef</span><br />
<br />
where you replace <span style="font-family: Courier New, Courier, monospace;">198_0011.nef</span> with the name of the raw file you want to process. The software will generate a TIFF file named <span style="font-family: Courier New, Courier, monospace;">198_0011.tiff</span>: the <span style="font-family: Courier New, Courier, monospace;">-v</span> flag prints progress messages, <span style="font-family: Courier New, Courier, monospace;">-T</span> writes a TIFF rather than a PPM file, <span style="font-family: Courier New, Courier, monospace;">-4</span> produces linear 16-bit output, and <span style="font-family: Courier New, Courier, monospace;">-o 0</span> keeps the camera’s native color space. See the <a href="http://www.cybercom.net/~dcoffin/dcraw/dcraw.1.html"><span style="font-family: 'Courier New', Courier, monospace;">dcraw</span> documentation</a> for the full list of options. This command creates a 16-bit TIFF file equivalent to what is generated by Raw Photo Processor.<br />
<br />
I will demonstrate processing on Photoshop, although equivalent functionality for our purposes is available in other software applications.<br />
<br />
You might want to experiment with RPP or dcraw, in order to get a good idea of what kind of image data your camera actually delivers. Here are some things you might want to look at and try:<br />
<ul><li>When you attempt to open the image in your image editing software, do you get a warning that your image does not have a profile? If so, assign it sRGB; if not, your software may not be color managed or it may automatically assign the standard sRGB color space — but this is a topic for a later article.</li>
<li>What is the overall tonality of the image? How much highlight detail do you have, compared to mid tones and shadows?</li>
<li>Do you have a full range of tones from white to black, or are there dominant bright colors? Are the darkest tones black or a noticeably gray color? </li>
<li>Take a photo of a scene under dim incandescent lighting, and another outdoors in the shade, and compare the overall ‘look’ of the images. Try taking photos of the same object under different lighting.</li>
<li>Try taking photos of the same scene at both high and low ISO. Compare the output. </li>
<li>Play with Levels and Curves, and see if you can get a good white balance. How much adjustment is needed, and where? Try the automatic correction tools such as Photoshop’s Auto Tone or Auto Color.</li>
<li>How is the saturation of the image? If you do increase saturation, how good are the resulting colors?</li>
<li>Try applying a curve to brighten the image, or use some other technique to brighten the shadows. How well does this correct the tonality of the image? How does this change the detail found in the highlights?</li>
<li>How much noise is in the highlights compared to the shadows?</li>
</ul><div>One critical step in raw processing for most (but not all) digital cameras, <a href="http://en.wikipedia.org/wiki/Demosaicing">demosaicing</a>, will be considered in the next article in this series:<br />
<br />
<a href="http://therefractedlight.blogspot.com/2014/03/cook-your-own-raw-files-part-2-some.html">Cook Your Own Raw Files, Part 2: Some Notes on the Sensor and Demosaicing</a></div>
<br />
<b>Wishes Granted!</b> (June 22, 2013)<br />
<br />
<span style="font-size: x-large;">A WHILE BACK,</span> I wrote two articles listing things that I’d like to see in some future version of Photoshop:<br />
<ul>
<li><a href="http://therefractedlight.blogspot.com/2010/11/photoshop-wishlist-1.html">Photoshop Wishlist #1</a></li>
<li><a href="http://therefractedlight.blogspot.com/2011/11/photoshop-wishlist.html">Photoshop Wishlist #2</a></li>
</ul>
Now some of the things I wrote are rather confused or unclear, but I did see a real need for them at the time, such as:<br />
<blockquote>
<span style="font-family: Verdana, sans-serif;">Each color channel has a maximum value of 255, a minimum value of 0, and we can use only integer steps between: 1, 2, 3, and so forth, with no intermediate values. This lack of precision is of little consequence to most users, and if you do need greater precision — for example, if you are applying severe curves to your image — then certainly you can use 16 bit mode (as I do) to increase the number of possible values. This extra precision helps avoid digital processing artifacts such as banding, and also lets you get better shadow detail…<br />
<br />
But that isn't good enough. I'd like to see fractional RGB numbers. I want RGB values greater than 255. I want negative RGB numbers. <i>But this is madness!</i> You cannot display an image with RGB values greater than 255! And what on earth are negative RGB values? Those are clearly impossible, there is <i>no such thing</i> as negative light!</span></blockquote>
OK, for a <i>final</i> image, it needs to be in some <i>specific</i>, bounded color space that can be reliably displayed or printed on various devices, and for the time being, the best color space for images is usually <a href="http://en.wikipedia.org/wiki/SRGB">sRGB</a>, at 8 bits per color channel, using the JPEG file format, since those standards are supported by nearly all computer monitors and desktop printers. But when you are processing an image, there very often is a temporary need for values that exceed the bounds of any particular color space:<br />
<blockquote>
<span style="font-family: Verdana, sans-serif;">For example, when I apply a severe curve to an image, anything that ought to go over 255 is set to 255, and so we lose information and image detail. However, if its value ought to be 300, I want it to be 300, even though it is out of the gamut for the time being. If I tell Photoshop to make an image twice as bright, I want the entire image to be twice as bright, without worrying about losing highlight detail. I will deal with the gamut when I need to deal with it, which is when I’m preparing the final image for print or web display…<br />
<br />
This brings us to negative RGB numbers. These in fact can represent real colors. For example, if you work in a narrow-gamut color space similar to sRGB, and you want to represent a real color outside of its gamut, you can mathematically represent this if you are willing to allow at least one RGB number which is negative or greater than 255. So a negative RGB does not mean negative light, but rather that it is merely an out-of-gamut condition. If we are allowed to use negative numbers — and numbers greater than 255 — then we will be able to represent all colors while still using a system that is otherwise identical to our narrow-gamut color system. This system will remain <i>relative</i> to a particular gamut, while not being <i>limited</i> to that gamut.</span></blockquote>
There were a number of other things that I wished for, including:<br />
<blockquote>
<span style="font-family: Verdana, sans-serif;">3. Photoshop does not resize images well, and often generates interference patterns. Lots of research has been done on these kinds of algorithms, and it would be good to see these better solutions in Photoshop…<br />
<br />
5. When you use curves in RGB, you can either do it with the Normal blending mode, which typically causes an increase in saturation, or you can do it with Luminosity blending, which decreases saturation. How about a simple method which does neither? I just want the tonality to change, not the basic coloration…<br />
<br />
8. I’ve noticed that there is a distinction between chroma, colorfulness, and saturation; not really sure how or what Photoshop does. A solid colorimetric model would be useful…</span></blockquote>
Now for the last month and a half or so, I’ve been working on my next book, which is due out this fall, and so I haven’t been posting here since I was rather busy. But during my photo processing, I’ve run into problems related to all of the wishes mentioned above. I was working with scenes that had tremendous dynamic range, and so I took multiple exposures — sometimes five separate images or more — and had to blend them together so as to create an attractive and plausible final image. The software I was using to blend these images would often over- or under-expose them, or would create odd shifts in color, such as turning reds to orange and blues to purple.<br />
<br />
I also had to add severe curves to images, do strong brightening of shadows and extreme pulling back of highlights, and all of these photos ultimately had to be converted to the <a href="http://therefractedlight.blogspot.com/2010/11/color-spaces-part-2-cmyk.html">CMYK</a> color system so that they can be printed on commercial press.<br />
<br />
I also had to use a good resizing algorithm that would retain image sharpness without <a href="http://therefractedlight.blogspot.com/2010/12/problem-of-resizing-images.html">aliasing artifacts</a>. And so I used the comprehensive <a href="http://www.imagemagick.org/">ImageMagick</a> utility, as I had on my two previous books. Even though it has a difficult-to-use command-line interface, my workflow is smooth, if a bit error-prone if I’m not careful. The particular algorithm I use seems to produce sharper images when downsizing greatly, reducing the need for additional sharpening as an image gets smaller.<br />
<br />
With regard to image sharpening, this is best done on a linear image — that is, one where a <a href="http://en.wikipedia.org/wiki/Gamma_correction">gamma correction</a> has not been applied — but Photoshop only allows me to do this in a clumsy and unsatisfactory manner. I had to use at least six different sharpening methods because of one difficulty or another. By the way, I think sharpening is an important technique that could really use more research. <br />
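The order of operations can be sketched in Python. This is a hypothetical illustration using a crude one-dimensional unsharp mask, not any particular product’s algorithm: decode the gamma, boost local contrast in linear light, then re-encode.

```python
# Sharpen in linear light: decode the gamma, boost local contrast, re-encode.
# A 1-D row of pixels and a crude box blur are enough to show the principle.
def decode(v, gamma=2.2):
    return (v / 255.0) ** gamma            # display value -> linear light

def encode(v, gamma=2.2):
    v = max(0.0, min(1.0, v))              # clamp over/undershoot
    return v ** (1.0 / gamma) * 255.0      # linear light -> display value

def sharpen_linear(row, amount=0.5):
    linear = [decode(v) for v in row]
    out = []
    for i, v in enumerate(linear):
        left = linear[max(i - 1, 0)]
        right = linear[min(i + 1, len(linear) - 1)]
        blurred = (left + v + right) / 3.0            # crude box blur
        out.append(round(encode(v + amount * (v - blurred))))
    return out

soft_edge = [100, 100, 180, 180]
# The edge darkens on its dark side and brightens on its bright side,
# which is what increased edge contrast means.
print(sharpen_linear(soft_edge))
```

Sharpening the gamma-encoded values directly would weight the dark side of the edge differently from the bright side, which is why working in linear light gives more even results.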
<br />
Since my first book, I have used the <a href="http://enblend.sourceforge.net/">enfuse</a> software package — another UNIX command-line utility — to blend together multiple exposures, but it was having trouble with the huge dynamic range of my subjects, far greater than what I had worked with before. Important, bright, saturated highlights were being overexposed, or my shadow tones were turned to pure black, often with strange artifacts. I also had severe color shifts. As this is a critical component of my workflow, I went to the product developers, who graciously helped me out, and even provided me with a new build that overcame some of the problems I was seeing. As it turns out, I was using features that should not be used with images that have a gamma correction applied, and the new version supported images with gamma. Also, the package can blend colors using <a href="http://en.wikipedia.org/wiki/CIECAM02">CIECAM02</a> — a color appearance model based on the properties of human color vision — which leads to more visually accurate results.<br />
<br />
I use a number of camera RAW file converters, simply because each converter has its strengths and weaknesses, and some images — for whatever reason — fail to convert well with one package or another. As I was busy processing my photographs for the book, I found out that <i>none</i> of my RAW converters worked on some images, giving me unappealing photos: extensive detail in the shadows or highlights was damaged no matter the settings, or colors were lost, or no adequate white balance could be achieved. It was a <b>desperate</b> situation, and I looked around for alternatives.<br />
<br />
I rediscovered <i><a href="http://rawtherapee.com/">RawTherapee</a></i>. I had downloaded it some years ago, but didn’t see much that interested me. But now, in my desperation, I needed something that would work, and so I got the latest version. Finding the user interface complex and unintuitive, I read the manual. As it turns out, many items on my wish lists are featured in this product, and it did what I needed it to do. Every problematic photo was processed easily and successfully. Wishes granted.<br />
<br />
<hr />
<br />
At a bare minimum, I would like an image to look plausibly as I remember seeing the scene. Almost always I can see detail and saturated color in bright highlights, and I can see texture and color in almost the deepest of shadows in the real world. Granted, some scenes have so much dynamic range that some sort of HDR photography, or supplemental lighting, is called for, but sometimes I have problems photographing scenes with flat lighting; for example, brightly colored flowers often produce problems. For my book, I often had bright, saturated reds which rendered poorly, either losing texture or shifting toward orange even though the white balance was adequate.<br />
<br />
As mentioned, I would like an image to look as I remember seeing the scene, at least as a starting point in my processing. But this high expectation usually can’t be matched by standard camera JPEGs, and using RAW files is sometimes problematic as my experience with the various RAW converters demonstrates. Even if I do want to adjust the tonality of my final image for effect, I almost never want blown highlights and plugged shadows, but I find these defects even in scenes with flat lighting.<br />
<br />
There is one critical step in RAW processing — either in the camera or on the computer — which can harm the final image. No camera perceives colors like the human eye, and one way the camera approximates human color perception is via a blunt instrument: a color matrix, a transformation of the RAW pixels where the various color channels are multiplied by factors and added to and subtracted from each other to approximate visual colors; see the article <a href="http://therefractedlight.blogspot.com/2011/06/examples-of-color-mixing.html">Examples of Color Mixing</a> for an example of this. This addition and subtraction can easily force pixels to either 0 or 255, which causes loss of texture. More complex ways of converting the colors, using look-up tables (LUTs) or parametric curves, while giving more accurate color (up to a point), can harm an image in unpredictable ways, and may even put a cap on the quality of a conversion.<br />
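For illustration, here is what such a matrix does, in Python. The matrix values are invented for this sketch and do not belong to any real camera:

```python
# Apply a 3x3 color matrix to a camera-RGB pixel. Off-diagonal negative
# terms (needed to approximate human color vision) can push channel values
# below 0 or above 255, which clipping then flattens into lost texture.
MATRIX = [                      # each row sums to 1.0 so white stays white
    [ 1.8, -0.6, -0.2],
    [-0.3,  1.6, -0.3],
    [ 0.0, -0.7,  1.7],
]

def apply_matrix(rgb, clip=True):
    out = []
    for row in MATRIX:
        v = sum(m * c for m, c in zip(row, rgb))
        if clip:
            v = max(0.0, min(255.0, v))   # the blunt instrument: clipping
        out.append(round(v, 1))
    return out

saturated_red = [250, 40, 30]
print(apply_matrix(saturated_red, clip=False))  # [420.0, -20.0, 23.0] -- out of range
print(apply_matrix(saturated_red, clip=True))   # [255.0, 0.0, 23.0] -- texture capped
```

Every saturated red in the scene that lands past those limits gets flattened to the same clipped values, which is exactly the loss of texture described above.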
<br />
This color conversion is mainly done these days via ICC profiles, a series of standards promulgated by the <a href="http://www.color.org/">International Color Consortium</a>. Adobe, however, uses its own profiles for its products. Not all profiles are created equal, even if they are all theoretically for the sRGB standard. Some profiles generate more noise than others; some will frequently clip highlights or plug shadows. For example, if you examine the blue color channel of a digital image, you might be dismayed at how noisy it is; part of this noise is simply due to the facts of digital capture, but the RAW conversion itself can generate large amounts of noise if a malformed profile is used, or if inadequate precision is used in the mathematics of the transform. You can read about this phenomenon in the article <a href="http://ninedegreesbelow.com/photography/negative-primaries.html">ICC Color Space Profiles and Blue Channel “Noise”</a>; there we see that “noise” can simply be an artifact of color conversion, which leads to loss of texture in the final image. I’ve seen these kinds of artifacts in my own experience with RAW conversion.<br />
<br />
<i>RawTherapee</i> uses high-precision mathematics when converting a RAW file — it uses 32 bits per color channel as a standard, and can optionally go to 64 bits per channel — and it attempts to use high-quality custom profiles for rendering the final image. It also allows unbounded calculations: even if an image is destined for the sRGB color space, if a particular pixel needs to go outside that color space temporarily, it will go outside, without being clipped. <i>Enfuse</i> also uses unbounded, high-precision mathematics for its calculations.<br />
<br />
Like <i>Enfuse</i>, <i>RawTherapee</i> now optionally uses the comprehensive CIECAM02 color appearance model, which allows for visually precise manipulation of color and tone levels. It separates color from tonality far more satisfactorily than the RGB and Lab color spaces used in Photoshop.<br />
<br />
It appears that most of my wishes have been granted in a way, although I must admit that these software packages lack the polish of expensive commercial software like Photoshop, and due to my time constraints, I have large gaps in my understanding of them and am undoubtedly not using them optimally.<br />
<br />
<hr />
<br />
A while back, I decided that I was too concerned with camera gear and photographic technique, and that instead I needed to concentrate on more universal artistic concerns, such as composition, color, lighting, and mood. But it was precisely at that moment that a large number of technical roadblocks were placed in front of me, and I was forced to get an understanding of imaging technology before I could concentrate on the other things. Ironic, yes? But this should be expected. Western culture, unfortunately, has developed an ‘art versus science’ mentality — and this disease is spreading to other cultures because it is thought to be progressive — but that was not always the case in the West. Rather, art and science are merely two aspects of the human person, and these ought to be joined together so as to produce fruitful offspring.<br />
<br />
My concern with all this high technology was so that I could get pleasing final images, and my existing technology failed in that purpose. But high-precision mathematics can lead to artistically precise images, and unbounded calculations remove the bounds from artistic intent. It is all a part of one process.
<br />
<b>Brief Advice on Learning Photography</b> (March 16, 2013)<br />
<br />
<span style="font-size: x-large;">SOMEONE WRITES</span> “I am buying a new camera, but cannot make up my mind…” The correspondent then lists a number of expensive recent-model cameras, commenting on various technical features, and why one might be more suitable than another for his stated purpose of taking photographs of sports, and of wildlife while hiking. He states that he is a beginner.<br />
<br />
<br />
A camera purchase can be an agonizing experience, especially given the choices available.<br />
<br />
When I first got into digital photography, back in the year 2001, I spent hours going over the reviews, and ended up getting a very expensive camera, one which was a top-rated camera back in those days. I wanted a camera mainly for taking nature photos while hiking.<br />
<br />
<a href="http://www.romeofthewest.com/2009/09/camera-diary.html">I was extremely disappointed in my photographs</a>, and wasn't able to return the camera for a refund. That disappointment killed my interest in photography for a number of years.<br />
<br />
Later, only when I needed to deliver good-quality photos, did I learn about photography, and so I discovered that my ‘bad’ camera was actually pretty good, especially after I learned the basics of composition, white balance, and exposure. Also, I learned to overcome some poor features of the camera with the right post-processing software and techniques.<br />
<br />
Be aware that the newest cameras today operate in a similar manner to cameras decades old; and old problems such as the color of light, and the basics of focus, exposure, and shutter speeds haven’t changed. Newer cameras won't do the thinking for you, although they try sometimes. <i>Don't expect that a camera will make your photography good. </i><br />
<br />
If you are a person who tends to get buyer’s remorse, then I would <i>not</i> suggest spending too much money on something, even if reviews and people like me strongly recommend it. However, I would avoid getting something that lots of people criticize. Rather, look for good values.<br />
<br />
Until you know what you are doing, you are merely guessing. Don’t worry too much about it; we’ve all gone through it. Here are some suggestions:<br />
<br />
<ul>
<li>Obtain or borrow an inexpensive camera, maybe one that is used. A super zoom camera might be good, or an older, used DSLR with a good zoom lens. If you are worried about making the right purchase, then spending only a little money on something OK might be better than getting an expensive camera that will be disappointing.</li>
<li>Go out and shoot lots of photos with it, under a variety of conditions, of various subjects.</li>
<li>Simultaneously, learn the basic theory of photography: exposure, shutter speeds, aperture, ISO speed, focus, etc. Learn the basics of general visual arts theory: composition, light, color: find good photographs and paintings and study them; find out what makes them good. Learn how to use your camera; don’t try to make lots of adjustments to your camera at first.</li>
<li>Get feedback on your images. You can post photos on the <span style="color: #0000ee;"><b><u>DPreview</u></b></span> forums and other places. It is most important that you post disappointing photos there, and ask why a particular photo might be disappointing. You are likely to get excellent feedback if you present a problematic photo and ask for advice for improvement.</li>
<li>Take the advice and experiment some more. See if your photos are less disappointing.</li>
<li>Find a fellow photo hobbyist or club and go out shooting with them, especially one who is more experienced.</li>
<li>I would not post photos on forums that you think are really good, expecting praise from others. Very often I’ve seen beginner photographers post photos, saying “look how great my photo is,” and they end up being savagely criticized. This is the nature of the modern artistic ego which seeks perfection: a thick skin is needed at times. A measure of humility is needed: rather, post a photo and ask for suggested improvements. Eventually you might be surprised and get lots of compliments.</li>
<li>Learn the basics of post-processing on the computer. Beginners tend to become enamored of special effects, but instead try to thoroughly learn the basics of levels, contrast, white balance, resizing, cropping, sharpening, etc.</li>
<li>Once you gain lots of hard-earned knowledge and experience, then you will pretty much know what kind of purchases you will need in the future.</li>
</ul>
<br />
Many beginners go through a <a href="http://therefractedlight.blogspot.com/2013/02/the-amateur-professional-and-artist.html">phase of loving the photographic process</a>, first placing lots of emphasis on gear, and then later on techniques. That is natural. But always keep your eye on the final purpose of photography: making good photographs.
<br />
<b>The Amateur, the Professional, and the Artist</b> (February 4, 2013)<br />
<br />
<span style="font-size: x-large;">A BEGINNING PHOTOGRAPHER ASKS:</span> <i>Am I an amateur photographer? </i> The questioner also asks whether the quality of his work shows that he is an amateur.<br />
<br />
Well, in answering questions of this type, it helps to first define the terms. Usually, we distinguish between amateurs and professionals: some photographers are definitely amateurs, and others are certainly professionals.<br />
<br />
The vast majority of photographs are taken by ordinary people who have no particular connection with photography other than a desire to capture memories and images of loved ones, and so we can only call them photographers in the loosest sense. On the other hand, there are people who are most definitely photographers. What follows are distinct <i>types</i> of photographers which may not precisely correspond to individuals; rather, they are illustrations of largely mutually exclusive types which may be present in varying degrees in every photographer at any given time. <br />
<br />
<b>The Amateur</b><br />
<br />
The English word ‘amateur’ comes from the Latin <i>amator</i>, meaning ‘lover.’ In the best sense, an amateur photographer is one who loves photography, and in older literature this basic aspect of loving the art form is quite clear. If the questioner loves photography, then he is most likely an amateur.<br />
<br />
An amateur will spend countless hours learning and refining photographic techniques, often by taking numerous photos of the same subject, even brick walls, and he often spends more time reading about photography and visiting camera stores than actually photographing. The amateur photographer constantly reads camera reviews, is uncertain if his camera is good enough, and thinks that some upgrade or gadget will be the magic bullet that improves his photography. When shooting a scene, the amateur agonizes over his camera settings, and fumbles with the controls, often keeping his human subjects impatiently waiting. While the amateur may reluctantly volunteer to take some important pictures for friends and family, these pictures can end up being disappointments because of the amateur’s uncertainty and fear.<br />
<br />
But the amateur loves every bit of time he spends on his hobby, even to the point that he pines away in pain when he is not doing it. He may nearly drool over camera reviews on the Internet, and his heart palpitates as he unboxes his new, finely shaped, and aesthetically pleasing camera — for which he paid far too much money. As most lovers know, pursuing the object of love can be seemingly irrational, obsessive, all-consuming, expensive, and even heart-breaking. <br />
<br />
I see no reason why an amateur photographer should be expected to produce good photographs, because perhaps the amateur loves photography — the art <i>itself</i> — instead of the final photographs, which are the <i>works</i> of that art. It is a subtle, but significant distinction, enjoying the technique and tools of photography over the fruit of these. But as children are sometimes unexpectedly produced by young lovers, an amateur photographer may occasionally produce a good image. <br />
<br />
<b>The Professional</b><br />
<br />
The English word ‘profession’ comes from the Latin <i>professio</i>, where it means, among other things, ‘a business or profession which one publicly avows,’ and this is the main current sense of the word. So a professional photographer is a photographer who does his work for the public instead of just for himself.<br />
<br />
The <i>business</i> of photography is mainly the art of selling and delivering photography and photographic services to the public. There are many professional photographers who consistently deliver the goods in a timely manner at a good price. They seek out clients, and give them what they pay for with no excuses or delays. They tell everyone they meet that they are professionals, and offer a wide variety of services, usually agreeing to give the customer what the customer wants. Working with them tends to be straightforward or even pleasant. The majority of the professional’s time is likely <i>not</i> spent doing photography, but rather doing those things that all businesses do, including marketing and sales.<br />
<br />
I see no reason why a professional photographer should necessarily be expected to produce outstanding photographs, for their <i>main</i> job is selling photography. They simply need to produce <i>good enough</i> work at a proportionally reasonable price, and do so in a manner that is convenient and pleasant for the client. As most business is repeat business, or comes from word-of-mouth referrals, social skills tend to be more important than technical skills. A digital image sitting on a computer, no matter how good, won’t sell itself, but good marketing can sell a mediocre image. The professional, who may struggle to support himself by working long hours, needs to work quickly and efficiently, and needs tools that are reliable. One way that the professional speeds along his work is by using standard light setups, camera settings, and a “house style”; these may not be optimal, but they work most of the time, and most importantly, they lead to consistency. The professional simply does not have the time to fiddle with his equipment or processing as does the amateur.<br />
<br />
<b>The Artist</b><br />
<br />
Neither of the above definitions directly brings up the idea of image quality, since all we can be sure of is that the amateur loves photography, and the professional sells photography, and I've seen good and bad photos from amateurs and professionals. Instead, let us introduce a third kind of photographer, the photographic artist, who can be relied on to <i>consistently</i> deliver <i>high quality</i> photographs.<br />
<br />
With some innate talent perhaps, and by an understanding of theory and lots of practice — and maybe inspiration — the artist has internalized the art and has made it a part of himself. To the artist, making something good is a joy to himself and he greatly fears making junk. When you observe an artist making art, it appears to be effortless on his part, for the artist makes good art as a matter of habit. The artist intimately knows how his gear works: the camera almost seems to be an extension of his body.<br />
<br />
But note that the artist may not be pleasant to work with, may be demanding, may not charge reasonable prices for his art, may not show up at the shooting location on time, and might be grouchy and irritable during the shoot. The artist may be a terrible businessman, but he cares far less about the business relationship than about the quality of the final product. He might show up on location, spend at most a few minutes doing his work, and then abruptly leave to everyone's astonishment, or he might put the crew through hours of misery because he expects perfection, but in either case the final product will be <i>outstanding</i>.<br />
<br />
Unlike the professional, for whom time is money, the artist may spend an extensive amount of time analyzing the scene, taking measurements, and setting things up carefully — or not. Unlike the amateur, the artist knows how his gear works and what it can deliver under a wide variety of conditions, and so there is very little guesswork or trial-and-error involved. The artist might be highly concerned about his equipment, like the amateur, but will not be devoured by it like the amateur, knowing very well that “<i>all that glisters is not gold.</i>” He likely will make the best of whatever equipment he has at hand. <br />
<br />
By analogy, we could say that the artist is not like an awkward young lover, but rather more like an old happily married man who loves his spouse but does not obsess over her, and rather sees her as the better half of himself. The fruit of this union is quality works of art. <br />
<br />
<b>One More</b><br />
<br />
I ought to add <i>dilettante</i> to this list, someone who pursues a subject out of curiosity, or for being a well-rounded individual, or even to socially project the appearance of being an expert. A dilettante doesn't do photography for the love of the art, nor to make money doing it, nor for the purpose of making excellent final photographs, but for some other satisfaction. Being a dilettante can be perfectly harmless, or merely a half-hearted hobby. It can also slide into snobbery, which is highly undesirable.<br />
<br />
<b>Conclusion</b><br />
<br />
As I mentioned earlier, these are more archetypes than they are stereotypes, more like models that distill the essence of human motivation, and so actual human beings are likely to be a mixture of these, or slide from one to the other over time. An amateur may eventually become an artist — and very many artists started out first as lovers of the art. <br />
<br />
A professional might start out as an amateur and may be an artist, but also consider that many people choose professions due to social pressure, or family, or because the profession appears to be a desirable career, having nothing to do with art or the love of an art. If the professional takes time out from business to really work on his photography, he too may become an artist. Or perhaps, if an artist takes time out from his work to develop business skills, he too might become a decent professional.<br />
<br />
I am sure there are many photographers out there who combine the best of all three: they have a love of the art, they are good at business, and they produce exceptional photographs as a matter of course. That is a good target to aim for!Mark S. Abelnhttp://www.blogger.com/profile/06692448528819277158noreply@blogger.com1tag:blogger.com,1999:blog-8768375296475349032.post-57466184610454539802012-11-27T06:44:00.001-06:002012-11-27T06:44:12.386-06:00“I bought it for the frame”<span style="font-size: x-large;">DO YOU THINK</span> that your photography is good? Do other people — people that aren't your friends or family — think that it is good? How would you feel if one of your photographs found its way into a junk shop or a flea market, and someone purchased it <i>only because of its frame?</i><br />
<br />
Ultimately, we must be humble enough to realize that no matter how much
artistic vision and effort we put into something, a buyer simply might
like our photograph <i>only</i> because it is nicely framed. <br />
<br />
When I did a Google search for the phrase “<a href="https://www.google.com/search?q=%22i+bought+it+for+the+frame%22">I bought it for the frame</a>,” I got over ten million search results, with many people telling of some print or painting they found at a junk shop, but which they discarded, simply because they liked the frame. Now we mustn't jump to the conclusion that the frame-buyers are ignorant, tasteless philistines. Perhaps your photograph isn't all that good. Perhaps the frame is really good.<span style="font-size: x-small;"> <i>[NOTICE: I must admit to having a bit of anxiety whenever I go to a book fair or used book store, thinking that I might find one of my own books being sold cheap.]</i></span><br />
<br />
Apparently, according to the same Google search, lots of people also buy bicycles only for their frame. They plan to strip the frame of all the seemingly more critically important stuff that actually makes the bicycle <i>work</i>, such as the wheels, gears, and chain. Surely these working components are <i>more important</i> than the frame? Doesn't the frame just <i>sit there</i>? The answer is that in many respects these components <i>are</i> more important, but the components that just happen to be attached to the frame may not be all that good or fitting for the purchaser. The brakes and tires on a bicycle gradually wear with use and slowly become less effective over time, and a bicycle rider can choose to replace them whenever it is convenient. But a bicycle frame must be perfectly durable, and it must never fail during use, for it cannot be repaired in the field: a frame does not slowly lose its functionality over time, for the welded joints on a frame are either rigid or broken, with no significant intermediate state. A frame, of course, can be repainted as needed.<br />
<br />
So what kind of framed photograph would be more valuable to almost any given person: an excellent portrait of someone else's child, or a cheap snapshot of their <i>own</i> child? We ought to realize that prints and paintings are indeed more important than their frames, but their importance tends to be <i>personal</i>. If someone buys a framed print at a flea market and then discards the original print, that is because the print <i>they</i> intend to display is more important to them than the original. Likewise, someone may purchase a used bicycle, but they might replace the seat for one that is more comfortable for them; they might replace the brakes because they are worn; but if the frame isn't good, they won't buy the bicycle.<br />
<br />
A photograph or painting may be chosen because of a particular style of a room, or because of a particular mood expressed, or its use of particular coordinating colors. The subject matter may spark the imagination of the buyer, or the subject may invoke particular memories or devotions. An image may be discarded because it no longer fits the decor of the room, or it may invoke unpleasant memories: maybe it is faded or worn, or it is no longer interesting, or it is out of style.<br />
<br />
Now, there are some artists, particularly in the past, who strove to make images that have a more universal, timeless character, that expressed objective beauty and the <a href="http://www.romeofthewest.com/2011/08/on-sublime.html">sublime</a>. This is rare today because modernity rejects the eternal and universal in favor of that which is transitory and cheap. This means, perhaps, that contemporary works are more prone to being quickly discarded.<br />
<br />
People may buy a print because of its frame, but the frame is not bought for its own sake, no matter how well it is made or decorated, for it is ultimately intended to enclose a print or a painting. I know of no museum or gallery that is dedicated to the presentation of frames as objects of art (although <a href="http://www.eliwilner.com/news/art-claim-frame.php">this</a> might be an exception), but there are vast numbers of merchants — including art galleries — that sell a wide variety of frames, and the cost of these frames may equal or exceed the cost of the images presented within them.<br />
<br />
Frames are works of art in themselves (as is anything intentionally well-made by man's intellect), but their purpose is mainly in relationship to the <i>fine art</i> contained within them. The word ‘fine’ in ‘fine art’ is related to Aristotle's understanding of the “<a href="http://plato.stanford.edu/entries/aristotle-causality/">final cause</a>” or ultimate <i>purpose</i> of a thing. The final cause of a frame is to support, display, protect, enhance, and delineate the work of art contained within it, as well as provide a visual transition between the work of art and its location. The buck stops at the image contained in the frame, for the image is the final cause of the frame: there the frame's job is complete, and the viewer's job of looking at the image begins. This does not mean that the frame is unimportant, for it has important functions, but it is subservient to those things, the images, which are greater. Even though we have differing opinions on what makes a good print or painting, we should not be surprised that most of us would largely agree on what makes a good frame, for frames have a more definite purpose.<br />
<br />
Getting a good understanding of composition is difficult, because it involves human psychology. The many proposed rules of composition seem to rest on shaky theoretical ground, and many of the supposed examples of the use of the rules are unconvincing. However, one element of composition is concrete and objective, that being the framing or the specific crop of the image. See the article <a href="http://therefractedlight.blogspot.com/2012/01/composition-part-1-frame.html">Composition, Part 1 - the Frame</a> for a more in-depth discussion of this. The objective framing of an image, due to a specific crop, can be a powerful tool of composition if used well, and bad framing can certainly harm an image.<br />
<br />
As the vast majority of images are rectangles, this suggests the good use of harmonic proportions between the length and width of the image and the proportions of the matting and the size of the frame. The common standard print, matte, and frame sizes do express proportions that harmonize well with each other. Attempts at making custom frames and mattes for a non-standard print size will generally be expensive and error-prone. Custom sizes may also look awkward if the maker does not apply the mathematics of proportion ahead of time: for example, it may be possible to harmonize an image with a large <a href="http://en.wikipedia.org/wiki/Aspect_ratio">aspect ratio</a> within a frame with a smaller aspect ratio if the margins or matting are well-chosen, but if the ratios are not chosen well, the final object may look ridiculous, cheap, or inartistic. <br />
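The proportion arithmetic here is simple to illustrate. A minimal Python sketch (the function and the particular sizes are hypothetical examples of my own, chosen only to show the calculation):

```python
def matte_borders(print_w, print_h, frame_w, frame_h):
    """Matte border widths needed to center a print in a frame opening.
    Returns (left/right border, top/bottom border) in the same units;
    a negative value means the print is too large for the frame."""
    return (frame_w - print_w) / 2, (frame_h - print_h) / 2

# A standard 12x8 inch (3:2) print centered in a standard 16x12 inch
# (4:3) frame opening: the borders come out equal all around.
borders = matte_borders(12, 8, 16, 12)  # (2.0, 2.0)
```

Here the standard sizes happen to yield equal two-inch borders on every side, which hints at why the common print, matte, and frame sizes work so well together; a non-standard print size will generally give unequal borders that must then be balanced by eye.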
<br />
For all these reasons, I think it would be prudent for photographers to give serious consideration to framing, since, after all, nearly every print that will be displayed on a wall needs a frame, and for the simple fact that a purchaser may buy your print because they like its frame.<br />
Mark S. Abelnhttp://www.blogger.com/profile/06692448528819277158noreply@blogger.com0tag:blogger.com,1999:blog-8768375296475349032.post-47667084665592676392012-11-12T15:40:00.001-06:002012-11-12T15:40:19.661-06:00Announcement<span style="font-size: x-large;">YOU CAN NOW PURCHASE</span> my photographs online:<br />
<br />
<span style="font-size: x-large;"><a href="http://msabeln.zenfolio.com/">http://msabeln.zenfolio.com</a></span><br />
<br />
Please see my announcement <a href="http://www.romeofthewest.com/2012/11/announcement.html">here</a>.Mark S. Abelnhttp://www.blogger.com/profile/06692448528819277158noreply@blogger.com0tag:blogger.com,1999:blog-8768375296475349032.post-34115574633177483082012-11-02T22:53:00.001-05:002012-11-02T22:54:04.518-05:00Ancient Wisdom about Photography<span style="font-size: x-large;">SOME PHILOSOPHY,</span> often paradoxical, for your enjoyment…<br />
<br />
A bad camera can be the best tool for making a good photographer.<br />
<br />
The best photographers can make the best photographs with even poor cameras. For this reason, the best photographers use the best cameras.<br />
<br />
A poor photographer blames his camera; a good photographer blames himself. For this reason, good photographers use cameras they cannot blame.<br />
<br />
Many good photographs are due to luck. Good photographers are luckier than poor ones.<br />
<br />
You must not care what master photographers think of your photography, for they are prone to envy. You achieve this by carefully following the advice of master photographers.<br />
<br />
Any person with minimal aptitude can become a good photographer if they spend thousands of hours learning. Some people have a natural talent for photography; they develop this talent by spending thousands of hours learning.<br />
<br />
If you desire to be creative above all else, then your photographs will have a boring sameness. Do what has been done thousands of times before, but strive to do it better, then you will find yourself to be creative.<br />
<br />
A bird song may be pretty, but the song is not art. Find inspiration in the work of the masters, but strive to be a master in your own right.<br />
<br />
The business of photography is not the art of photography, for the art of selling a photograph is different from the art of making a photograph.<br />
<br />
You have mastered photography when it is graceful, effortless, and joyful. Your tools ought to appear to be a natural extension of yourself.<br />
<br />
Cameras change and technologies change, but art never changes, for art is inside and flows from above.<br />
<br />
Being a good photographer does not mean that you are a good person. It simply means that your photographs are good.<br />
—<br />
<br />
<i>[Post-processing is the work done on digital images using a computer image processing program such as Photoshop; also, this will include traditional darkroom work for photographic film.]</i><br />
<br />
All photographs are post-processed; one just has to understand the meaning of post-processing. <br />
<br />
If you capture a good image in the camera, then that image needs no post-processing. In order to do good post-processing, you need to capture a good image in the camera.<br />
<br />
To master photography, you must master post-processing. You have mastered post-processing when it appears as if you did not use post-processing.<br />
<br />
You must master Photoshop by mastering its functions. You master Photoshop’s functions by never using most of them. Likewise, the worst Photoshop books are those that explain all of its functions, and the best are those that explain only a few.<br />
<br />
In order to post-process a photograph of a subject, you must bring out the subjectness that the photograph failed to capture.<br />
<br />
To sharpen a photograph in Photoshop, you should not use the Sharpen function, but rather the Unsharp Mask function.<br />
<br />
To achieve utter freedom and creativity in post-processing, you must enslave yourself to the logic and mathematics underlying post-processing.<br />
<br />
The sRGB color space is the worst color space because it represents the narrowest range of colors of any standard RGB color space. For this reason, sRGB is the best color space to use in post-processing.<br />
<br />
Do not trust your eyes, for they deceive you, and so you must measure the color numbers to ensure that they are good. But you must trust your eyes, for if the image does not look good, then the color numbers must not be good.<br />
<br />
You must spend thousands of hours post-processing images in order to post-process images quickly.<br />
<br />
If you must ask if Photoshop is the right post-processing software for you, then Photoshop is the wrong software for you. Mark S. Abelnhttp://www.blogger.com/profile/06692448528819277158noreply@blogger.com0tag:blogger.com,1999:blog-8768375296475349032.post-85036532777437469402012-08-29T16:11:00.000-05:002012-08-29T16:11:51.848-05:00Composition in Landscapes and the Photography of Marcin Sobas<span style="font-size: x-large;">SOME INSPIRATIONAL LANDSCAPE</span> photography, from Polish photographer Marcin Sobas, can be found <a href="http://500px.com/MarcinSobas">here</a>.<br />
<br />
<table cellpadding="2"><tbody>
<tr><td style="border-bottom: 0px solid #fff;"><a href="http://500px.com/photo/8507724"> <img alt="Ruins by Marcin Sobas (MarcinSobas) on 500px.com" border="0" height="280" src="http://pcdn.500px.net/8507724/3b9d4809d089216b70add8fedfd0bddc029b5fb9/3.jpg" style="margin: 0 0 5px 0;" width="280" /> </a> <br />
<i><span style="font-size: x-small;"> <a href="http://500px.com/photo/8507724">Ruins</a> by <a href="http://500px.com/MarcinSobas">Marcin Sobas</a></span></i></td></tr>
</tbody></table>
<br />
Sobas has lately gained a lot of positive attention for his remarkable landscapes of Moravia and Tuscany. <br />
<br />
A while back, I made an effort to learn why some landscape photography has great appeal, and I attempted to identify the common characteristics of great landscape images. Now, there is no end to advice that can be found on the subject of landscapes, but I desire to discover those characteristics that are more <i>certain</i> and <i>definite</i>. Some of my observations can be found in the article <a href="http://therefractedlight.blogspot.com/2012/06/composition-part-2-composition-and.html">Composition, Part 2 - Composition and Subject in Landscape Photography</a>.<br />
<br />
From my analysis of highly-regarded landscape images, I found some characteristics that nearly all of them share. These ought not be considered unbreakable rules, nor should this list be considered exhaustive, for they are not the <i>only</i> things that photographers consider; rather this is simply what I saw, and there could be great landscapes that are otherwise.<br />
<br />
1. Almost by definition, a landscape ought to have a superhuman scale. Good landscapes depict scenes that dwarf the human person, and so have the characteristic of <i>sublimity</i>. The sublime describes “a sense of awe, grandeur, or greatness, something that is lofty to an extreme degree, so much so that it dwarfs the human person in insignificance.” See the article <a href="http://www.romeofthewest.com/2011/08/on-sublime.html">On the Sublime</a> for more details. A sublime scene may or may not be a beautiful scene, but it certainly has to be <i>big</i>, and Sobas’ images show rather big scenes that are sublime and beautiful.<br />
<br />
Imagine taking a photograph of a small garden; the flowers may be beautiful, but the scene will likely lack sublimity, because the garden is of human scale. This problem of scale concerned the designers of the Victorian-era Tower Grove Park in Saint Louis, Missouri, USA, and they knew that the sublime would not be possible in their park. The results are pretty, but not lofty, as I show in the article <a href="http://www.romeofthewest.com/2009/05/photos-of-tower-grove-park.html">here</a>.<br />
<br />
2. Unusual use of lenses can make for better landscape photos. Beginning landscape photographers often desire ultra-wide angle lenses so as to “get the whole scene in.” But consider that wide angle lenses not only get in the whole scene, but at the same time they make distant objects recede in size and scale, taking away the impression of sublimity. Wide angle lenses instead emphasize the foreground, which may include objects of a more human scale, while reducing the grand vistas of the background.<br />
<br />
Instead, Sobas often uses a telephoto lens, a Canon 70-200mm f/4 L-series lens, which gives a horizontal angle of view of 18.2 to 6.4 degrees on his Canon 40D camera. This narrow angle of view provides foreshortening — making distant objects appear closer to each other — as we see with the hills in the photograph above. The use of a telephoto exaggerates the vertical dimension at the expense of perceived depth. Would the scenes have appeared as sublime if he had stood closer, and had used a wide-angle lens?<br />
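Angle-of-view figures like these follow from simple trigonometry: the horizontal angle of view of a rectilinear lens is twice the arctangent of half the sensor width over the focal length. A small Python sketch (assuming the 40D's APS-C sensor width of roughly 22.2 mm; the function name is my own):

```python
import math

def horizontal_angle_of_view(focal_length_mm, sensor_width_mm=22.2):
    """Horizontal angle of view, in degrees, of a rectilinear lens on a
    sensor of the given width (22.2 mm approximates the Canon 40D's
    APS-C sensor)."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

# The two ends of a 70-200 mm zoom on the 40D:
wide = horizontal_angle_of_view(70)   # about 18 degrees
tele = horizontal_angle_of_view(200)  # about 6.4 degrees
```

The narrow end, at about six degrees, takes in only a sliver of the horizon, which is what produces the stacked, foreshortened ridgelines in these photographs.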
<br />
You may, however, consider the final size of your image and how close you will view it: if you are creating a panorama that will cover the wall of a room, then small detail becomes more prominent, and so a wider angle of view may not decrease the impression of sublimity.<br />
<br />
Also note that Sobas often uses a high camera angle. Instead of just seeing one line of ridges, we can see multiple lines of ridges and hilltops, one behind the other, which increases the grandeur of the scenes.<br />
<br />
3. Good landscapes are almost always taken around sunrise or sunset, or at night. I’m not saying that good landscapes<i> can’t</i> be taken at midday, I’m just saying that they typically <i>aren’t</i>. The lighting angle during the extremities of the day is low, and so shadows thrown are long, and serve to model the undulating terrain. In this way, early or late landscape photography is like using Rembrandt lighting for portraiture, which models the human face with shadow. Harsh lighting, like we find at midday, will often underexpose shadows or overexpose highlights; on the contrary, with the sun at a low angle, the sky acts as a great fill-in light. The attenuated orange light from the sun provides a good contrasting color with the blue of the sky, giving us far more color during the preferred times of day.<br />
<br />
<table cellpadding="2"><tbody>
<tr><td style="border-bottom: 0px solid #fff;"><a href="http://500px.com/photo/9457165"> <img alt="Autumn ... by Marcin Sobas (MarcinSobas) on 500px.com" border="0" height="280" src="http://pcdn.500px.net/9457165/40a9c6f03db5936bce6094c7760fe089b539c0e6/3.jpg" style="margin: 0 0 5px 0;" width="280" /> </a> <br />
<i><span style="font-size: x-small;"> <a href="http://500px.com/photo/9457165">Autumn ...</a> by <a href="http://500px.com/MarcinSobas">Marcin Sobas</a></span></i></td></tr>
</tbody></table>
<br />
According to <a href="http://500px.com/blog/173/portrait-marcin-sobas">this interview</a>, Sobas prefers cloudless mornings for his shooting. I’ve noticed that while sunsets are often pretty, the sky at sunrise is usually dull, but this makes for a better, more uniform light for this kind of work. <br />
<br />
4. Unusual weather can help improve a landscape photo. Dramatic stormy skies and snow on the ground can turn an ordinary landscape into something more special. Sobas likes foggy mornings to make his photos more interesting:<br />
<br />
<table cellpadding="2"><tbody>
<tr><td style="border-bottom: 0px solid #fff;"><a href="http://500px.com/photo/4921473"> <img alt="Rays by Marcin Sobas (MarcinSobas) on 500px.com" border="0" height="280" src="http://pcdn.500px.net/4921473/852f74b7e1fbf5b87a673c5a3082415254ecd9ec/3.jpg" style="margin: 0 0 5px 0;" width="280" /> </a> <br />
<i><span style="font-size: x-small;"> <a href="http://500px.com/photo/4921473">Rays</a> by <a href="http://500px.com/MarcinSobas">Marcin Sobas</a></span></i></td></tr>
</tbody></table>
<br />
He recommends getting some knowledge of weather so as to predict the best times for taking photos. The <a href="http://www.lawrencevilleweather.com/fogmaps/us">Lawrenceville Weather</a> website includes a fog forecast map for the lower 48 United States; I refer to this map frequently to find interesting shooting conditions. Also of use is <a href="http://photoephemeris.com/">The Photographer's Ephemeris</a>, an application that calculates the angle of the sun; this can help to predict the direction of shadows, which may lead to better compositions.<br />
<br />
5. Good landscapes usually have a full range of tones or color. Sobas subtly post-processes his images, and the final results do have a broad range of tones. The simple use of the levels tool, and saturation or vibrance — not done too strongly — can enhance a landscape photo without making it look overprocessed. Choosing the right subject, exposure, white balance, time of day, time of year, and weather conditions all contribute to getting good color.<br />
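The levels adjustment mentioned here is easy to state precisely: it remaps a chosen input range onto the full tonal range. A simplified per-channel sketch in Python (the function and its default cut-off values are my own illustration, not any particular program's implementation):

```python
def levels(value, black=10, white=245, gamma=1.0):
    """A simplified Levels adjustment for one 8-bit channel value:
    map the input range [black, white] onto [0, 255], with an optional
    midtone (gamma) adjustment. Values outside the range are clipped."""
    x = min(max(value - black, 0), white - black) / (white - black)
    return round(255 * x ** (1.0 / gamma))

# Stretching a slightly flat tonal range: shadows below 10 become pure
# black, highlights above 245 become pure white.
stretched = [levels(v) for v in (10, 128, 245)]  # [0, 128, 255]
```

Applied gently, a stretch like this fills out the tonal range without the clipped, overprocessed look that comes from pushing the black and white points too far inward.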
<br />
6. Good landscapes typically have a unity and harmony, and avoid distracting details. A certain measure of abstraction works well. Again, many of Sobas’ images are so abstract that they, at first glance, appear to be paintings, but instead they are almost undoubtedly straight camera images with some mild post-processing.<br />
<br />
This is perhaps the most difficult part of landscape photography: what subject, what camera position, and what lens and cropping best suit the image? A good photographer ought to be able to view a scene in the field, taking in both the subject and potentially distracting elements, instead of merely doing the same back home on the computer. Especially when an image is to be displayed at a small size on a computer screen, a large measure of abstraction is needed, more so than if the final image is larger.<br />
<br />
7. Remember that photographs are made to be viewed by human beings, and adding a bit of human interest to an image may make a photograph more interesting to your viewers. Having a human in a landscape can draw attention to it, and in the best examples, can transform an ordinary landscape photograph into a dreamscape, deepening its emotional impact. From what I've seen, Sobas does not often include humans in his photos, but we do see buildings, boats, roads, and sometimes animals. I might add that most or all of these images depict landscapes that have been heavily altered by humans, perhaps over thousands of years, but in a harmonious way, and so they have an organic look to them.<br />
<br />
8. Good landscape photos are usually made with good equipment and good technique. Because landscapes may not be as intrinsically interesting as a human figure, it takes extra effort to attract the eye. Journalistic style images can be rough, and that does not detract from them; indeed, a rough image may have a feeling of immediacy about it. Landscapes, on the other hand, are more timeless, and seem to call for more perfection. <br />
<br />
<hr />
There are any number of rules or principles used in landscape painting and photography, and the brief list above merely records my observations of what most good landscapes seem to share. I haven’t mentioned commonly-cited principles such as the use of diagonals, leading lines, the <a href="http://therefractedlight.blogspot.com/2010/07/rule-of-thirds.html">rule of thirds</a>, balance, avoiding subjects leaving the scene, the use of S curves, having a definite center of attention, and so forth, simply because these principles, in my mind, aren’t certain, or perhaps I simply don’t understand them well enough. Human psychology is complex, but some things are more certain than others; getting the basics right is more important than the subtleties. After knowledge, experience, and inspiration, comes more perfection.Mark S. Abelnhttp://www.blogger.com/profile/06692448528819277158noreply@blogger.com0tag:blogger.com,1999:blog-8768375296475349032.post-33781292996502174532012-08-20T13:31:00.002-05:002012-10-09T06:35:15.579-05:00Sensor Size and the Total Quantity of Light<span style="font-size: x-large;">IT IS SOMETIMES SURPRISING</span> that even inexpensive cameras can take good quality images in the bright mid-day sun. I've seen many photos from cell-phones and from ridiculously cheap point-and-shoot cameras that have more than adequate image quality. Maybe these images aren't particularly optically sharp, but even low-end cameras can produce images in full sunlight that have a low amount of digital noise. They are “good enough.” <br />
<br />
But one of the overarching rules of thumb in photography is that the larger the sensor size (or film size), generally speaking, the better the image quality of the final photograph. A bigger sensor makes it easier to have a lower noise image, a bigger sensor makes it easier to make matching quality optics, a bigger sensor makes it easier (up to a point) to make an ergonomic camera, and so forth. Now, I'm not saying that quality photos can’t be taken with a tiny image sensor; rather, it is easier to take an image with higher technical image quality by using a larger sensor. See the article <i><a href="http://therefractedlight.blogspot.com/2010/07/one-easy-rule-for-quality-images.html">One Easy Rule for Quality Images</a></i> for more details.<br />
<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjDTrnZhzJBK9cxn5Tz2keCpFt6ljjuJZvPqhBDh7eDyyqHXZuMNHeP9JEzUbigV6DcF9wmnjyaL0tU0F1T7e1Qyk9wLRXGogO3_ElQ9u2lQQkECMzFtbmAunvI0fBssTnKjMG0-KT9WNU/s1600/500px-Sensor_sizes_overlaid_inside_-_updated.svg.png"><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjDTrnZhzJBK9cxn5Tz2keCpFt6ljjuJZvPqhBDh7eDyyqHXZuMNHeP9JEzUbigV6DcF9wmnjyaL0tU0F1T7e1Qyk9wLRXGogO3_ElQ9u2lQQkECMzFtbmAunvI0fBssTnKjMG0-KT9WNU/s1600/500px-Sensor_sizes_overlaid_inside_-_updated.svg.png" /></a><br />
<br />
<i>A comparison of camera sensor sizes. [<a href="http://en.wikipedia.org/wiki/File:Sensor_sizes_overlaid_inside_-_updated.svg">Source and attribution</a>]</i><br />
<br />
As noted, even low-end digital cameras can produce good images in broad daylight. The problem is that their image quality tends to sharply decline as the light gets dimmer. These cameras, taking photos under dim incandescent lighting, produce images that are a noisy mess, with terrible color rendition and digital grain ruining the sharpness of the image. Now, perhaps a tripod could help, but certainly these kinds of cameras are very disappointing for hand-held images.<br />
<br />
Our eyesight doesn’t work as we might naïvely think. One scene, which to our eyes appears to be slightly dimmer than another, might in fact have <i>half</i> of the total amount of light falling on it. Likewise, a scene that appears to be only <i>somewhat</i> brighter than another might in reality be twice as bright. In particular, where I live, in the mid-latitudes of the northern hemisphere, we get to enjoy long periods of dusk in mid-summer; the fading daylight seems to last for hours, until we finally notice that it is very dark out. Our eyes valiantly attempt to see in the dimming light, until the laws of physics and biology finally conspire against our vision, and we are plunged into darkness. Our eyes attempt to flatten out the huge range of brightness that we experience.<br />
<br />
A hazy day may be objectively half as bright as a sunny day, although it certainly seems to be only slightly dimmer. A cloudy day may be one fourth as bright, while an overcast day may be one eighth as bright. At sunset, it may be one sixteenth as bright as a bright sunny day, and a bright day may be thirty-two times as bright as what we find at dusk. On ground covered with snow or white sand, a scene may be twice as bright as what we are accustomed to, and there is a real risk of contracting snow blindness due to the excessive amount of light. <br />
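Photographers count these doublings and halvings as “stops,” each stop being one halving of the light. The rough ratios above work out as follows (a sketch; the condition names and ratios are only the approximate figures from this paragraph, not measurements):

```python
import math

# Rough relative brightness versus a bright sunny day, as estimated above.
conditions = {
    "bright sun": 1.0,
    "hazy": 1 / 2,
    "cloudy": 1 / 4,
    "overcast": 1 / 8,
    "sunset": 1 / 16,
    "dusk": 1 / 32,
}

for name, ratio in conditions.items():
    stops = math.log2(ratio)  # negative values are dimmer than bright sun
    print(f"{name:10s} {stops:+.0f} stops")
```

So dusk, at one thirty-second the brightness, sits a full five stops below bright sun, even though to the eye it seems merely dim.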
<br />
Cameras, like eyes, are designed to work over a large range of brightness. Camera lenses have adjustable apertures to vary the amount of light hitting the sensor, and the shutter speed can be varied over a large range of values. Sensors also vary in their sensitivity to light. But a sensor with twice the surface area of another collects twice the total amount of light, and we could assume (all things otherwise being equal) that it can operate similarly in light that is half as bright.<br />
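This “total light” idea can be put in numbers. If image quality is governed by the total light collected, then a sensor N times larger in area can tolerate a scene one-Nth as bright at the same settings, all else being equal. A hedged sketch, using the approximate sensor areas from the table below (the function is my own illustration):

```python
# If quality is set by total light collected, a sensor N times larger in
# area can match the baseline in light 1/N as bright, all else equal.
def dimmest_relative_light(sensor_area_mm2, baseline_area_mm2=7.68):
    """Fraction of the baseline scene brightness at which this sensor
    collects the same total light as a 1/4-inch (7.68 sq mm) phone
    sensor in bright daylight."""
    return baseline_area_mm2 / sensor_area_mm2

# A 1/2.3-inch compact sensor (about 28 sq mm) versus the phone sensor:
# it matches the phone's full-sun quality in light a bit more than a
# quarter as bright.
ratio = dimmest_relative_light(28)
```

This is only a first-order estimate: it ignores differences in sensor generation, pixel design, and lens speed, but it explains why larger sensors keep their quality further into dim light.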
<br />
Now I've taken decent photos in dark places with a cheap point-and-shoot camera, but that was only when the camera was sitting on a tripod and its shutter was open for a long time. I certainly could not hand-hold the camera and expect to get anything except digital noise. However, I can and do often take fairly decent hand-held shots at dusk with my Nikon DSLR. The major difference between these two cameras is simply the size of the sensor: the Nikon lets in a far larger total amount of light.<br />
<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhCaedFho2Naag_5zguRXC6N9keHOr3t6SkS_n1czRXHv5e7KX1K8BAqT5fV8EXsuTxZEvIwi7wPGCi_YvFoxqb6fKlwU_eAD_V3d4zlGUktyY3gEdRhyphenhyphenzcZDKyjaMheT8fTEV2KuOqly8/s1600/Sensor+size+comparison.jpg"><img height="753" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhCaedFho2Naag_5zguRXC6N9keHOr3t6SkS_n1czRXHv5e7KX1K8BAqT5fV8EXsuTxZEvIwi7wPGCi_YvFoxqb6fKlwU_eAD_V3d4zlGUktyY3gEdRhyphenhyphenzcZDKyjaMheT8fTEV2KuOqly8/s640/Sensor+size+comparison.jpg" width="500" /></a><br />
<br />
<i>Same scene taken with a newer cell phone camera on top, and an older DSLR camera on the bottom.</i><br />
<br />
We know that cheap point-and-shoot cameras, selling for less than US$50 and having tiny sensors, can take good images in broad daylight with little digital noise. Let us take this quality as our baseline, and determine what size of sensor we need if we want to take images of similar quality, with similar camera settings, under dimmer lighting. This table shows standard digital sensor sizes, along with the lighting conditions that would be equivalent to typical cell phone cameras in bright daylight:<br />
<br />
<table border="0" cellspacing="0" cols="4" frame="VOID" rules="NONE"> <colgroup><col width="125"></col><col width="223"></col><col width="86"></col><col width="190"></col></colgroup> <tbody>
<tr> <td align="LEFT" bgcolor="#C0C0C0" height="32" style="border-bottom: 1px solid #000000;" width="125"><b>Sensor size</b></td> <td align="LEFT" bgcolor="#C0C0C0" style="border-bottom: 1px solid #000000;" width="223"><b>Use</b></td> <td align="CENTER" bgcolor="#C0C0C0" style="border-bottom: 1px solid #000000;" width="86"><b>Sensor area in square millimeters</b></td> <td align="LEFT" bgcolor="#C0C0C0" style="border-bottom: 1px solid #000000;" width="190"><b>Lighting condition</b></td> </tr>
<tr> <td align="LEFT" height="17">1/4” </td> <td align="LEFT">Cell phones and toy digital cameras.</td> <td align="CENTER">7.68</td> <td align="LEFT">Bright daylight</td> </tr>
<tr> <td align="LEFT" height="17"><span style="background-color: #fff2cc;">1/3.2”</span></td> <td align="LEFT"><span style="background-color: #fff2cc;">Premium cell phone cameras.</span></td> <td align="CENTER"><span style="background-color: #fff2cc;">15.5</span></td> <td align="LEFT"><span style="background-color: #fff2cc;">Hazy sunlight</span></td> </tr>
<tr> <td align="LEFT" height="17">1/2.3”</td> <td align="LEFT">Compact digital cameras.</td> <td align="CENTER">28</td> <td align="LEFT">Cloudy bright</td> </tr>
<tr> <td align="LEFT" height="17"><span style="background-color: #fff2cc;">1/1.7”</span></td> <td align="LEFT"><span style="background-color: #fff2cc;">Premium compact cameras.</span></td> <td align="CENTER"><span style="background-color: #fff2cc;">43</span></td> <td align="LEFT"><span style="background-color: #fff2cc;">Light overcast</span></td> </tr>
<tr> <td align="LEFT" height="17">2/3”</td> <td align="LEFT">Some bridge cameras.</td> <td align="CENTER">58</td> <td align="LEFT">Heavy overcast</td> </tr>
<tr> <td align="LEFT" height="17"><span style="background-color: #fff2cc;">CX or 1”</span></td> <td align="LEFT"><span style="background-color: #fff2cc;">Nikon 1 series.</span></td> <td align="CENTER"><span style="background-color: #fff2cc;">116</span></td> <td align="LEFT"><span style="background-color: #fff2cc;">Sunset</span></td> </tr>
<tr> <td align="LEFT" height="33">Micro 4/3rds</td> <td align="LEFT">Olympus and Panasonic mirrorless interchangeable lens cameras.</td> <td align="CENTER">225</td> <td align="LEFT">Dusk</td> </tr>
<tr> <td align="LEFT" height="32"><span style="background-color: #fff2cc;">APS-C</span></td> <td align="LEFT"><span style="background-color: #fff2cc;">Most Nikon, Pentax, and Sony DSLRs; lower-end Canon sensors are slightly smaller at 329 square mm. Also found in some premium rangefinder cameras.</span></td> <td align="CENTER"><span style="background-color: #fff2cc;">370</span></td> <td align="LEFT"><span style="background-color: #fff2cc;">Indoor sports, stage shows</span></td> </tr>
<tr> <td align="LEFT" height="32">35mm, “Full frame”</td> <td align="LEFT">High-end cameras from Nikon, Pentax, Sony, Canon, and Leica.</td> <td align="CENTER">864</td> <td align="LEFT">Bright street lighting at night</td> </tr>
</tbody> </table>
<br />
A camera sensor that has twice the surface area ought to produce an image with a similar amount of digital noise when the lighting is half as bright, all else being equal. Certainly there are more factors involved, but sensor size is one of the most significant when it comes to image noise. <br />
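As a rough check on the table, the advantage of a larger sensor can be expressed in photographic stops. Here is a short Python sketch; the areas come from the table above, and treating each doubling of area as one stop of usable dimness is the rule of thumb just stated, not a law:

```python
import math

# Sensor areas in square millimeters, taken from the table above.
areas = {
    '1/4-inch': 7.68,
    '1/2.3-inch': 28,
    'Micro 4/3rds': 225,
    'APS-C': 370,
    '35mm full frame': 864,
}

def stops_vs_baseline(area_mm2, baseline_mm2=7.68):
    """Number of halvings of scene brightness a sensor should tolerate,
    relative to the 1/4-inch baseline, if noise tracks total light."""
    return math.log2(area_mm2 / baseline_mm2)

for name, area in areas.items():
    print(f'{name}: about {stops_vs_baseline(area):.1f} stops dimmer light')
```

By this reckoning a full-frame sensor tolerates light nearly seven stops dimmer than a cell phone sensor, which matches its "bright street lighting at night" entry in the table.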
<br />
Photojournalists tend to use the cameras near the bottom of the list, especially if they need to capture a scene in dim lighting without the use of a flash. Note that the Micro 4/3rds cameras are fairly close in sensor size to the APS-C sized cameras, and their discreet size and noiseless operation make them viable for some work under dim lighting. Manufacturers have recently been putting the larger APS-C and 35mm sensors into compact cameras, which many photographers find highly desirable.<br />
<br />
For more information, along with some of the data I used to make the table above, see these Wikipedia articles:<br />
<ul>
<li><a href="http://en.wikipedia.org/wiki/Image_sensor_format">Image sensor format</a></li>
<li><a href="http://en.wikipedia.org/wiki/Exposure_value">Exposure value</a></li>
</ul>
Mark S. Abelnhttp://www.blogger.com/profile/06692448528819277158noreply@blogger.com0tag:blogger.com,1999:blog-8768375296475349032.post-47181395157554735672012-07-30T15:24:00.001-05:002012-07-30T15:25:02.986-05:00“Clayton, Missouri: An Urban Story”<a href="http://www.flickr.com/photos/msabeln/7679158184/" title="Clayton_cover by msabeln, on Flickr"><img src="http://farm9.staticflickr.com/8163/7679158184_55ca8e6f2c_z.jpg" width="488" height="640" alt="Clayton_cover"></a><br />
<br />
The latest book of my photography can now be pre-ordered <a href="http://www.amazon.com/gp/product/1935806335/ref=as_li_ss_tl?ie=UTF8&camp=1789&creative=390957&creativeASIN=1935806335&linkCode=as2&tag=romeofthewest-20">here</a>.Mark S. Abelnhttp://www.blogger.com/profile/06692448528819277158noreply@blogger.com0tag:blogger.com,1999:blog-8768375296475349032.post-38356035313676188452012-07-27T19:46:00.000-05:002012-07-27T19:46:32.744-05:00Two-Color (Or One-Axis) Color Systems<span class="Apple-style-span" style="font-size: x-large;">RESEARCH INTO COLOR</span> motion pictures <a href="http://en.wikipedia.org/wiki/List_of_color_film_systems">started soon after </a>cinematography itself was invented in the late 19<sup>th</sup> century. While color photography at that time was already well-established in the laboratory and by intrepid amateurs, cinema had its own problems, notably the need to project multiple frames per second in order to give the illusion of motion.<br />
<br />
The main method of making color photographs was suggested in 1855 by the Scottish physicist James Clerk Maxwell: expose three photographic plates separately through red, green, and blue filters, then project those images, overlapping, through the same filters to produce a color image on a screen. Alternatively, the same images could be printed on paper using various colored inks.<br />
<br />
The main problem was determining how to do the same thing with cinematography. Any method devised would have to be visually impressive, relatively inexpensive, and would have to be extremely reliable, especially during projection at the theater. Using three cameras with three color filters was out of the question, due to parallax problems, and worse was the great expense and difficulty of aligning three separate projectors.<br />
<br />
Compromises had to be made, and one such compromise was using only <a href="http://en.wikipedia.org/wiki/RG_color_space">two colors</a>: some color, perhaps, is better than no color. Film stock is transparent and has two sides, and many methods were devised so that one side would be sensitive to one range of colors, with the other side being sensitive to another range of colors. The film would be developed, producing an image on both sides, which were then dyed to the appropriate colors. The film could then be projected through standard projectors with no additional equipment needed. Surprisingly many films were created with two-color methods, starting in 1908 and becoming common in the 1920s; the approach remained in use into the 1950s. But few of these color films remain with us today, and many of the survivors are now only available in monochrome versions specially made for early television.<br />
<br />
While the two-color method died out in favor of three-color cinematography, by no means should we think that these kinds of methods are completely obsolete, being only temporary solutions limited to a particular place and time in history. Instead, I think that these methods, reinvented with digital technology, are interesting in their own right and can be used by contemporary photographers for artistic purpose. My related research on imitating Autochrome, an early color photographic process with a more limited color palette than is now standard, can be found <a href="http://therefractedlight.blogspot.com/search/label/Autochrome">here</a>.<br />
<br />
<a name='more'></a><br />
Consider this photograph taken in Forest Park, in Saint Louis, Missouri, during a balloon race:<br />
<br />
<a href="http://www.flickr.com/photos/msabeln/7638467028/" title="Great Forest Park Balloon Race (2011), in Saint Louis, Missouri, USA - balloon and two girls - original by msabeln, on Flickr"><img alt="Great Forest Park Balloon Race (2011), in Saint Louis, Missouri, USA - balloon and two girls - original" height="332" src="http://farm8.staticflickr.com/7272/7638467028_7a47371c88.jpg" width="500" /></a><br />
<br />
Many of the early two-color process films used green and red filters. My first naïve attempt at two-color photography was simply to eliminate the blue channel in Photoshop, by filling it with black:<br />
<br />
<a href="http://www.flickr.com/photos/msabeln/7638469102/" title="Great Forest Park Balloon Race (2011), in Saint Louis, Missouri, USA - balloon and two girls - no blue channel by msabeln, on Flickr"><img alt="Great Forest Park Balloon Race (2011), in Saint Louis, Missouri, USA - balloon and two girls - no blue channel" height="332" src="http://farm8.staticflickr.com/7127/7638469102_12d203e31e.jpg" width="500" /></a><br />
<br />
Instead of mainly red and green colors, we here have mainly yellow. As yellow is the opponent color of blue, and since we eliminated the blue channel, we should expect to get lots of yellow. Oddly enough, my eye still sees some blue here when none exists: is this simply because I know what the colors ought to be, or is some other subtle effect at work? I do know that Edwin Land, the inventor of polarizing filters and Polaroid instant picture film, thought that he could get full-color images from only two color filters. His research remains controversial to this day: it was never put into a commercial product, and I’ve never seen a convincing demonstration of it, but still I am not sure.<br />
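For those who prefer working outside Photoshop, the blue-channel blackout is a one-liner with NumPy. This is a minimal sketch, assuming an image array in R, G, B channel order; the actual image above was processed in Photoshop:

```python
import numpy as np

def drop_blue(rgb):
    """Return a copy of an RGB image with the blue channel filled
    with black, mimicking the Photoshop step described above."""
    out = rgb.copy()
    out[..., 2] = 0   # channel order assumed to be R, G, B
    return out

# A white pixel becomes pure yellow once blue is removed,
# which is exactly the overall cast seen in the image above.
pixel = np.array([[[255, 255, 255]]], dtype=np.uint8)
yellow = drop_blue(pixel)
```

White minus blue is yellow, which is another way of seeing why the whole picture takes on a yellow cast.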
<br />
Now I can’t say that this yellow image simulates a two-color film process, for I’ve never actually seen a two-color film. In a darkened theater, the eye’s own automatic white balance would be active: would the yellow colors then appear closer to white? I have no way of verifying this. And normal attempts at white balancing this image in RGB, leaving us only green and red tones, are not possible, since we no longer have a blue channel.<br />
<br />
I can bring the whites of the original scene back to neutral by adding a blue layer on top of this image, set to Color mode at 50% opacity, but the shadows then take on a blue cast, which is not what we want here.<br />
<br />
Alternatively, I can use Photoshop’s <a href="http://help.adobe.com/en_US/photoshop/cs/using/WSfd1234e1c4b69f30ea53e41001031ab64-764fa.html#WSfd1234e1c4b69f30ea53e41001031ab64-765fa">Photo Filter</a> function, which simulates the use of color filters placed in front of a camera lens while shooting. I must admit that I find this function to be rather mysterious, for as far as I can tell, it does things that cannot be reproduced by the use of curves and levels or any other type of processing in the RGB color space. Perhaps it moves the image into another color space, such as Lab, but I cannot verify this.<br />
<br />
What I did with this image is use a Photo Filter layer, using the RGB primary blue color (0, 0, 255), at 100% Density, 50% opacity, and checked Preserve Luminosity. The image still had an overall color tone, but I was able to white balance it easily using Curves:<br />
<br />
<a href="http://www.flickr.com/photos/msabeln/7638851436/" title="Great Forest Park Balloon Race (2011), in Saint Louis, Missouri, USA - balloon and two girls - red and green 2 by msabeln, on Flickr"><img alt="Great Forest Park Balloon Race (2011), in Saint Louis, Missouri, USA - balloon and two girls - red and green 2" height="332" src="http://farm8.staticflickr.com/7121/7638851436_e816f34b5c.jpg" width="500" /></a><br />
<br />
This is almost precisely what I expected to get. Now what use is this kind of processing? I leave that up to you.<br />
<br />
We can get two more variations of this method by blacking out the red or green channels, and using the Photo Filter with the color of whatever channel is eliminated, but the expected results we got with red-green colors are not duplicated:<br />
<br />
<a href="http://www.flickr.com/photos/msabeln/7638956464/" title="Great Forest Park Balloon Race (2011), in Saint Louis, Missouri, USA - balloon and two girls - blue and green by msabeln, on Flickr"><img alt="Great Forest Park Balloon Race (2011), in Saint Louis, Missouri, USA - balloon and two girls - blue and green" height="332" src="http://farm8.staticflickr.com/7260/7638956464_9085eb959c.jpg" width="500" /></a><br />
<br />
Using only the blue and green channels does not give us a blue and green image, but rather green and violet. What is going on here? Clearly I don’t understand color vision as well as I’d like, nor do I understand the processing used by Photo Filter.<br />
<br />
Likewise, using only the red and blue channels does not give us an image in those colors, but rather blue and yellow:<br />
<br />
<a href="http://www.flickr.com/photos/msabeln/7638958186/" title="Great Forest Park Balloon Race (2011), in Saint Louis, Missouri, USA - balloon and two girls - red and blue by msabeln, on Flickr"><img alt="Great Forest Park Balloon Race (2011), in Saint Louis, Missouri, USA - balloon and two girls - red and blue" height="332" src="http://farm9.staticflickr.com/8143/7638958186_3398cb0d1f.jpg" width="500" /></a><br />
<br />
My apologies to the young ladies for the horrible skin tones on the last two images.<br />
<br />
The Photo Filter function of Photoshop seems to be rather powerful, and I am convinced that it transcends the RGB color model, but it does not work in a manner that I consider predictable or in a way that I understand. Far more understandable is the Lab color space in Photoshop, which implements a color system based on studies of human color vision. See the article “<a href="http://therefractedlight.blogspot.com/2012/02/color-spaces-part-4-lab.html">Color Spaces, Part 4: Lab</a><i>”</i> for more information. Like RGB, Lab describes color with three numbers, but instead of specifying the amounts of red, green, and blue light, it uses one number for lightness and two others for color.<br />
<br />
The two Lab color axes are <i><b>a</b></i>, which runs from a slightly bluish green to magenta, and <i><b>b</b></i>, which runs from a slightly orangish yellow to a sky blue. These pairs are opponent colors: if you mix a negative ‘a’ color with a positive ‘a’ color of equal value, you should get a neutral gray.<br />
<br />
RGB uses three fixed primary colors, and the specific primaries used by any RGB color model are specified by its color standard: sRGB uses primary colors that are closer together than does Adobe RGB. The choice of primary colors limits our color gamut. Lab, on the contrary, does not use primary colors; instead the two color axes are theoretically unlimited, thereby allowing any color whatsoever to be represented.<br />
<br />
Moving the image to Lab, and then by setting either the <b><i>a</i></b> or <b><i>b</i></b> channels to 50% Gray, we can quickly get a two-color (or alternatively, a one-axis) image. Here are the colors of the a channel:<br />
<br />
<a href="http://www.flickr.com/photos/msabeln/7639331814/" title="Great Forest Park Balloon Race (2011), in Saint Louis, Missouri, USA - balloon and two girls - Lab a by msabeln, on Flickr"><img alt="Great Forest Park Balloon Race (2011), in Saint Louis, Missouri, USA - balloon and two girls - Lab a" height="332" src="http://farm9.staticflickr.com/8165/7639331814_057959df16.jpg" width="500" /></a><br />
<br />
In the photo above, the b channel is neutral, and this shows us a range of colors available in the a channel. If we neutralize the a channel instead, we get a different range of colors:<br />
<br />
<a href="http://www.flickr.com/photos/msabeln/7639333502/" title="Great Forest Park Balloon Race (2011), in Saint Louis, Missouri, USA - balloon and two girls - Lab b by msabeln, on Flickr"><img alt="Great Forest Park Balloon Race (2011), in Saint Louis, Missouri, USA - balloon and two girls - Lab b" height="332" src="http://farm9.staticflickr.com/8142/7639333502_e5bb2b9ae1.jpg" width="500" /></a><br />
<br />
Good results, with an unchanged white balance and very little effort. But what if we want to target colors other than the ones given us here? By using the Lab color space, we can have very precise control over color, as long as we are willing to do a lot of hard work, but the results are reliable and predictable. The following discussion includes extensive use of algebra, trigonometry, and geometry: proceed at your own risk.<br />
<br />
Consider this photo, taken at a graduation ceremony last year, at the University of Missouri - Saint Louis:<br />
<br />
<a href="http://www.flickr.com/photos/msabeln/7639498942/" title="UMSL graduation by msabeln, on Flickr"><img alt="UMSL graduation" height="332" src="http://farm8.staticflickr.com/7115/7639498942_fe45f5cb0b.jpg" width="500" /></a><br />
<br />
Suppose we want to convert this to a two-color image, but we want to preserve the red color on the robes of the speaker. We can do this by algebraically transforming the Lab a and b coordinates.<br />
<br />
Please consider this rather complicated diagram:<br />
<br />
<a href="http://www.flickr.com/photos/msabeln/7658013268/" title="sRGB colors in Lab chart by msabeln, on Flickr"><img alt="sRGB colors in Lab chart" height="500" src="http://farm8.staticflickr.com/7115/7658013268_6905eccd61.jpg" width="500" /></a><br />
<br />
Click on the image to examine this chart at its largest size.<br />
<br />
This shows the relative locations of sRGB colors within the Lab color space. Although Lab can represent all visible colors, the sRGB space can only display about 35% of the color range visible to the human eye; it closely models the gamut of colors displayable on ordinary computer screens or flat-panel televisions. The colored, irregular line shows the colors where one RGB color channel is at its maximum value of 255, another channel is 0, and the third channel takes on all values — therefore it traces only the brightest, most saturated colors of sRGB. The Lab color space was designed to be fairly visually uniform — that is, equal changes of Lab coordinates produce visually equal changes in color throughout the chart — whereas sRGB was not designed to be particularly uniform across all colors.<br />
<br />
Like standard artists’ <a href="http://en.wikipedia.org/wiki/Color_wheel">color wheels</a>, this image portrays colors arranged in a cyclical form — you can start at red, continuously change the hue, go through the set of common hues, and then return back to red where you started. But unlike the color wheels, this shows that the primary colors are perhaps not quite as absolute nor as uniformly spaced as we may like. (I created a color wheel specifically using the sRGB primary colors, which can be seen <a href="http://therefractedlight.blogspot.com/2011/10/visually-uniform-digital-color-wheel.html">here</a>.)<br />
<br />
When editing in Lab in Photoshop, we can set either the a or b channels to 50% gray, which will eliminate that color axis from the image. The key to our processing is to <i>rotate</i> the colors in a way that will bring our key color to lay either on the a or b Lab axis, and then to eliminate the colors on the other axis by setting it to 50% gray. We then <i>rotate</i> the colors back to where they were before.<br />
<br />
OK, back to the graduation sample image. The red value that I want to preserve has R = 223, G = 51, and B = 44; the equivalent Lab color is L = 51, a = 65, and b = 48. I calculate that my red is located at an angle of arctangent(b/a) = arctangent(48/65), or about 36 degrees, above the a axis.<br />
<br />
What we do next is rotate all of the colors around — by negative 36 degrees — so that my red is now on the a axis. We use the Greek letter theta (θ) as the symbol of the amount of rotation:<br />
<br />
New a value = cosine(θ) x (old a value) - sine(θ) x (old b value)<br />
New b value = sine(θ) x (old a value) + cosine(θ) x (old b value)<br />
<br />
More information on this algebraic transformation can be found <a href="http://en.wikipedia.org/wiki/Rotation_(mathematics)">here</a>.<br />
<br />
For this example, where θ = −36 degrees, sine(θ) ≈ −0.59 and cosine(θ) ≈ 0.81. I move my image to the Lab color space, make a copy of the image, and then desaturate the copy, turning both the a and b channels to 50% gray. Using the Apply Image command in Photoshop, I’ll either add or subtract the a and b color channels of the old image to the channels of my new image. Here is an example of one of the Apply Image commands:<br />
<br />
<a href="http://www.flickr.com/photos/msabeln/7659241700/" title="Sample Apply Image by msabeln, on Flickr"><img alt="Sample Apply Image" height="281" src="http://farm9.staticflickr.com/8152/7659241700_d24d69265c.jpg" width="470" /></a><br />
<br />
This is a somewhat complicated procedure if you try to follow the steps in your head, but here is the result of the color rotation:<br />
<br />
<a href="http://www.flickr.com/photos/msabeln/7659106642/" title="UMSL graduation - rotated colors by msabeln, on Flickr"><img alt="UMSL graduation - rotated colors" height="332" src="http://farm9.staticflickr.com/8147/7659106642_73d7c7380b.jpg" width="500" /></a><br />
<br />
Every color was rotated by the same amount in Lab, and the red color of the robe is now equal to a = 80 and b = 0, showing us that this color is now along the a axis. Now since we are eliminating the b axis, there is no need to create a new rotated b, saving ourselves two Apply Image operations, but I thought you’d like to see all the colors here.<br />
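The rotation arithmetic can be checked numerically. Here is a minimal Python sketch using the Lab values quoted above; the Apply Image steps in Photoshop perform the same multiplications and additions on whole channels:

```python
import math

def rotate(a, b, theta_deg):
    """Rotate a Lab (a, b) color pair by theta degrees about the origin."""
    t = math.radians(theta_deg)
    return (math.cos(t) * a - math.sin(t) * b,
            math.sin(t) * a + math.cos(t) * b)

a0, b0 = 65, 48               # the red of the robes, in Lab

# Rotating by -36 degrees puts the red almost exactly on the a axis.
a1, b1 = rotate(a0, b0, -36)  # roughly (80.8, 0.6)

# Neutralize the b axis, then rotate back by +36 degrees:
# the chosen red survives, within rounding error.
a2, b2 = rotate(a1, 0, 36)    # roughly (65.4, 47.5)
```

Every other color undergoes the same rotation, so only hues near the chosen axis come through the neutralization unchanged.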
<br />
Now we can eliminate the b axis, and then rotate the colors by the same amount, but in the opposite direction:<br />
<br />
<a href="http://www.flickr.com/photos/msabeln/7659188864/" title="UMSL graduation - preserved red color by msabeln, on Flickr"><img alt="UMSL graduation - preserved red color" height="332" src="http://farm8.staticflickr.com/7139/7659188864_a3ac4c60d6.jpg" width="500" /></a><br />
<br />
Here we have the colors rotated back to where they used to be, and the red color is preserved, within the limits of rounding errors. Here I did the same thing, but this time preserving the Ph.D. blue color on the robes of the seated scholars:<br />
<br />
<a href="http://www.flickr.com/photos/msabeln/7659189722/" title="UMSL graduation - preserving PhD Blue by msabeln, on Flickr"><img alt="UMSL graduation - preserving PhD Blue" height="332" src="http://farm9.staticflickr.com/8150/7659189722_22e70bfae1.jpg" width="500" /></a><br />
<br />
And for fun, I mixed these images together, using the “Blend If” sliders within the Layer Style box, preserving both the red and blue, but eliminating green altogether:<br />
<br />
<a href="http://www.flickr.com/photos/msabeln/7659190668/" title="UMSL graduation - preserving red and PhD Blue by msabeln, on Flickr"><img alt="UMSL graduation - preserving red and PhD Blue" height="332" src="http://farm9.staticflickr.com/8151/7659190668_a455cea317.jpg" width="500" /></a><br />
<br />
Photoshop is a bit more powerful than we would expect. Direct algebraic manipulation of images is a powerful method; undoubtedly similar techniques could be used with digital cinema for interesting special effects.Mark S. Abelnhttp://www.blogger.com/profile/06692448528819277158noreply@blogger.com0tag:blogger.com,1999:blog-8768375296475349032.post-41760732504351704872012-07-17T19:28:00.001-05:002013-07-28T06:33:38.710-05:00At the Limit of Perception<span class="Apple-style-span" style="font-size: x-large;">MANY PHOTOGRAPHERS AIM FOR</span> exceptionally clean images, low in noise, and high in dynamic range. However, extreme sensor sensitivity is rarely needed for most photographs, especially if the photographer sticks to the basic rules of photography, which include the practice of using good lighting. A good, bright primary source of light, along with perhaps fill-in lights or reflectors, is typically needed to get good photographs.<br />
<br />
But consider this photograph of a <a href="http://en.wikipedia.org/wiki/Canoe">canoe</a>, taken about 45 minutes after sunset, on a moonless, starless night, illumined by the waning skylight, distant fireworks and lightning, and a lone incandescent lamp a hundred or more yards away:<br />
<br />
<a href="http://www.flickr.com/photos/msabeln/7530528702/" title="DSC_5231 by msabeln, on Flickr"><img alt="DSC_5231" height="332" src="http://farm9.staticflickr.com/8006/7530528702_737314b8d1.jpg" width="500" /></a><br />
<br />
This was an interesting scene to my eyes, but there isn’t much to see in my image — just a very faint outline of an object. You might have better luck seeing something if you click the photo twice to see it in Flickr with a dark gray background.<br />
<br />
I took this with my camera mounted on a tripod, but because I could hardly focus at all, I set the aperture to f/8 for greater depth of field, and I didn’t use a long exposure time because I didn’t want to spend two hours getting my photo: one hour, perhaps, for the exposure, and one hour for dark frame subtraction. Sometimes it is inconvenient or even impossible to get a good exposure, so you have to make do with what you can get. I wanted to see how good an image I could get at the limit of the camera’s performance.<br />
<br />
<a name='more'></a><br />
<br />
Here is Photoshop’s histogram of the image:<br />
<br />
<a href="http://www.flickr.com/photos/msabeln/7530699318/" title="Histogram of JPEG version of DSC_5231 by msabeln, on Flickr"><img alt="Histogram of JPEG version of DSC_5231" height="253" src="http://farm9.staticflickr.com/8289/7530699318_feb7e9fccb_o.png" width="309" /></a><br />
<br />
The number of dark pixels is shown on the left hand side of the histogram, and any bright pixels would be shown on the right hand side. The heights of the colored patches show us the relative number of pixels of any given brightness, taking into account the color. We see a big spike on the left hand side of the chart, which tells us that most of the image is black. No surprise. The histogram shows that some blue, cyan, and green pixels are somewhat brighter than black, and if you examine the image of the canoe closely, you can see a faint area of brighter tones.<br />
<br />
By using the Levels tool in Photoshop, I was able to brighten the image:<br />
<br />
<a href="http://www.flickr.com/photos/msabeln/7533047642/" title="DSC_5231 Levels by msabeln, on Flickr"><img alt="DSC_5231 Levels" height="332" src="http://farm8.staticflickr.com/7256/7533047642_d7469d85e1_o.jpg" width="500" /></a><br />
<br />
We can now see the subject fairly clearly.<br />
<br />
I brightened each channel individually so that the histogram goes all the way across, just touching 255 on the right:<br />
<br />
<a href="http://www.flickr.com/photos/msabeln/7533046912/" title="Histogram of JPEG Levels version of DSC_5231 by msabeln, on Flickr"><img alt="Histogram of JPEG Levels version of DSC_5231" height="253" src="http://farm9.staticflickr.com/8287/7533046912_22530bcd97_o.png" width="309" /></a><br />
<br />
The image is particularly rough, and about 66% of all the pixels are pure black. Note the gaps between the colors on the histogram: this evidences itself in the photo as noise and a distinct lack of color range. The light from the sky was bluish, as seen on the canoe, but the canoe sits on a gravel bar that was reddish, not green at all.<br />
<br />
I shot this at my camera’s base ISO, and used the camera’s dark frame subtraction feature to minimize noise, but we have very little data to get a good image. But, even with severe underexposure, we still can get <i>some</i> image, even if it is inferior.<br />
<br />
In my experience, a large percentage of digital images are rather deficient in shadow detail — certainly mine are — and so can benefit from some shadow-brightening, either in post processing, or in-camera, by using features such as Nikon’s <a href="http://www.nikonusa.com/Learn-And-Explore/Nikon-Camera-Technology/fsqd6p6h/1/Active-D-Lighting.html">Active D-Lighting</a>.<br />
<br />
The first JPEG image above was processed from the original RAW file using the original camera settings:<br />
<ul><li>White balance = Auto (Because whatever white balance was here was not particularly obvious.)</li>
<li>Tone compensation = −2 (Leading to less contrast and better conservation of highlight and shadow detail.) </li>
<li>Saturation = 0 (Lowering digital noise as well as conserving highlight and shadow detail.)</li>
<li>ISO sensitivity = 200 (Base sensitivity of my sensor, for less noise. This may not be needed, as we shall see later.)</li>
<li>Color mode = Ia (sRGB color space, the standard used by most digital cameras, printers, and the Internet.)</li>
</ul>These are all good basic settings for ordinary daylight photography, settings that will likely produce a good photograph that might not even need any post-processing at all. But extraordinary circumstances require extraordinary camera settings and post-processing.<br />
<br />
Please consider the following steps that most digital cameras do when converting their original RAW sensor data to a JPEG image:<br />
<br />
<b>White balance</b> — that is, adjusting the image so that neutral objects appear neutral in the JPEG image — is done by multiplying the red and blue color channels by values that depend on the color of light falling on the scene. While this is a necessary step under most conditions, it increases noise in an image, since the red and blue values are amplified by the multiplication. Under one particular daylight lighting condition, my camera will multiply the red sensor values by 2.11 and the blue values by 1.52, while the green values remain unchanged. Under the incandescent lighting in my office, the blue channel is multiplied 4.4 times. Image files generally only use values from 0 to 255 for each color channel, so any intermediate values get rounded off, leading to more noise. Also, large values will get cut off if they exceed 255, which can lead to color shifts in highlights.<br />
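A simplified model of that multiplication, in Python; the 2.11 and 1.52 gains are the daylight figures quoted above, and the rounding and clipping here are deliberately crude:

```python
def white_balance(r, g, b, r_gain=2.11, b_gain=1.52):
    """Amplify the red and blue channels, round to integers, and clip
    at 255, as a simplified model of in-camera white balance."""
    def clip(v):
        return min(255, round(v))
    return clip(r * r_gain), g, clip(b * b_gain)

print(white_balance(100, 100, 100))  # (211, 100, 152)
print(white_balance(150, 200, 180))  # (255, 200, 255): red and blue clip
```

The second call shows how clipping shifts highlight colors: the red and blue channels saturate at 255 while green does not.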
<br />
See these articles for more information:<br />
<a href="http://therefractedlight.blogspot.com/2011/01/white-balance-part-1.html">White Balance, Part 1</a><br />
<a href="http://therefractedlight.blogspot.com/2011/09/white-balance-part-2-gray-world.html">White Balance, Part 2: The Gray World Assumption and the Retinex Theory</a><br />
<br />
<b>Color space conversion</b> — digital camera sensors don’t perceive colors as does the typical human eye, and so a conversion takes place within the camera to approximate human vision. Under the simplest method, a linear mathematical combination of the values from the sensor is used to approximate human color perception. More precisely, the color is converted to a standard color space such as sRGB, which typically is a subset of humanly visible colors. For my camera, the <a href="http://www.dxomark.com/index.php/Cameras/Camera-Sensor-Database/Nikon/D40">DxOMark website</a> states that this formula is used:<br />
<blockquote class="tr_bq"><span class="Apple-style-span" style="font-size: x-small;">sRGB red value = 1.64 x RAW red − 0.61 x RAW green − 0.02 x RAW blue</span><br />
<span class="Apple-style-span" style="font-size: x-small;">sRGB green value = −0.11 x RAW red + 1.45 x RAW green − 0.35 x RAW blue</span><br />
<span class="Apple-style-span" style="font-size: x-small;">sRGB blue value = 0.03 x RAW red − 0.34 x RAW green + 1.32 x RAW blue</span></blockquote>Please note that some of these coefficients are negative, and negative values aren’t allowed for sRGB color numbers, so plenty of plausible colors captured by the camera get set to zero in the final JPEG image, and detail is lost. All of this multiplication increases noise, and don’t forget that the white balance step increases noise as well. The negative coefficients also indicate that the range of colors the camera can perceive exceeds the range of colors that the sRGB standard can represent.<br />
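The three formulas above are just a 3×3 matrix multiplied against the RAW channel values, with negatives clipped away afterward. Here is a minimal sketch using the coefficients quoted from DxOMark; the sample color is an invented, sensor-saturated green chosen to show a negative result being lost.

```python
# Sketch of the sensor-to-sRGB conversion using the 3x3 matrix quoted
# above from DxOMark for my camera. The input color is a made-up example.
import numpy as np

M = np.array([[ 1.64, -0.61, -0.02],
              [-0.11,  1.45, -0.35],
              [ 0.03, -0.34,  1.32]])

def raw_to_srgb(raw_rgb):
    """Linear combination of the RAW channels, then clip negatives,
    since sRGB forbids negative channel values."""
    srgb = M @ np.asarray(raw_rgb, dtype=float)
    return np.clip(srgb, 0.0, None)

# A strongly green color as seen by the sensor: the red and blue results
# come out negative and are clipped to zero, so that color detail is lost.
srgb = raw_to_srgb([0.1, 0.9, 0.1])
```

Working the red row by hand: 1.64 × 0.1 − 0.61 × 0.9 − 0.02 × 0.1 = −0.387, which clips to zero in the JPEG.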
<br />
See this article for more information:<br />
<a href="http://therefractedlight.blogspot.com/2011/06/examples-of-color-mixing.html">Examples of Color Mixing</a><br />
<br />
<a href="http://en.wikipedia.org/wiki/Gamma_correction"><b>Gamma correction</b></a> is used to reassign the color tones in the image so that more values are assigned to the dark and mid tones, making the storage, display, and editing of the image more practical and convenient, as well as being a bit more in line with human perception of relative tones. For example, a medium gray tone, something that we’d call 50% gray, actually only reflects about 12%–18% of the light falling on it. A linear image, without gamma correction, would not allocate much data to the critically important dark and mid tones, and so would likely show banding in the shadows and dark colors.<br />
<br />
Gamma correction is calculated by raising each value to a power:<br />
<blockquote class="tr_bq">Gamma corrected red value = (sensor red value)<sup>(1/Gamma correction) </sup>with the same being done for the other color channels.</blockquote>The signal is normalized so that the value read from the sensor is divided by the maximum possible sensor value, so the numbers here run from 0 to 1. Raising either 0 or 1 to any power gives the same number back, so the endpoints are fixed and only the intermediate tones are adjusted. Typically, the gamma correction value used is 2.2, which may not be perfectly in harmony with human vision, but it is good enough for most photographic work. Please note that many cameras will handle values close to zero in a somewhat different manner, but for the purposes of this discussion, the power function is what is important. <br />
<br />
If a particular pixel in a RAW image reads at 22% of its maximum value, then the gamma corrected value will be .22<sup>(1/2.2)</sup> = .50; so we can see clearly that mid tones get pushed higher in value. An sRGB value of (128,128,128), which appears to the eye to be close to a medium gray tone, and which is about half of the maximum value available in sRGB — (255,255,255) being pure white — represents a much lower reflectance in real life than its middle values suggest.<br />
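The worked example above can be checked with a short sketch; the function below is my own illustration of the power-function step only, ignoring the near-zero special handling mentioned above.

```python
# Sketch of the gamma-correction power function described above.
# Real cameras treat values near zero differently; this shows only
# the exponent step.
def gamma_correct(value, max_value=255, gamma=2.2):
    """Normalize to the 0..1 range, then raise to 1/gamma."""
    normalized = value / max_value
    return normalized ** (1.0 / gamma)

# A pixel at 22% of maximum lands near 0.50 after correction,
# while 0 and full scale pass through unchanged.
mid = gamma_correct(0.22, max_value=1.0)
```

Running this confirms the figure in the text: 0.22<sup>1/2.2</sup> ≈ 0.50, while 0 and 1 map to themselves.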
<br />
<b><a href="http://en.wikipedia.org/wiki/Film_speed#Digital">ISO sensitivity adjustment</a></b> amplifies the signal coming from the sensor, giving us the option of trading a shorter exposure time (or greater depth of field) against increased digital noise. Now, if you are going to produce a nicely exposed full-tone image in-camera, and you want great technical image quality, then by all means use the camera’s base ISO setting. But if you are doing what I’m doing here — severely underexposing an image just so that you can get <i>something</i> quickly — then using a high ISO may not increase noise at all.<br />
<br />
I have to boost the brightness of my sample image in post-processing, a process that increases noise. As we saw with the adjusted image above, we can brighten it but be left with a very poor image, with lots of blotches of color and with most of the image being pure black. If instead I had used the same shutter speed and aperture, but boosted the ISO from 200 to 1600, I would have gotten a JPEG with much more detail. Boosting ISO increases noise, but as far as I know, all cameras generate less noise with their electronic ISO adjustment than we can obtain when brightening an 8-bit JPEG image. The general rule is this: if you would otherwise have to brighten an image after the fact in post-processing, and you can’t brighten it with longer shutter speeds or wider apertures, then by all means use the highest native ISO the camera offers, getting the image as close as possible to the brightness you need.<br />
<br />
<b>Saturation and contrast adjustments</b> are other steps done in-camera in order to produce a pleasing JPEG output. Saturation increases the vibrance of the colors, while contrast adjustment will compress the highlights and shadows to produce more distinct mid tones. Both of these techniques emphasize some details while eliminating others, and generally produce images that have less information content than an unprocessed image.<br />
<br />
<b>Noise reduction</b> is something else that digital cameras typically do with images — this process can be severe or non-existent, or selectable by a menu. Clearly, noise reduction usually destroys detail, but it can be necessary to produce an image that is tolerable. <a href="http://en.wikipedia.org/wiki/Dark-frame_subtraction"><b>Dark frame subtraction</b></a> is a kind of noise reduction that does not destroy much valid detail, but instead removes systematic or patterned noise generated by the sensor, especially during long exposures. The camera, after taking an exposure, will close the shutter and then take another exposure for the same length of time but without light falling on the sensor: this ‘dark frame’ exposure is then subtracted from the digital image, subtracting out this patterned noise. I recommend doing this if your camera supports it; the major problem is that it doubles your total exposure time.<br />
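Dark frame subtraction as described above amounts to a per-pixel subtraction, clamped so that values stay valid. Here is a minimal sketch; the tiny arrays are invented stand-ins for a real exposure and its matching dark frame.

```python
# Sketch of dark-frame subtraction: the patterned noise recorded with the
# shutter closed is subtracted pixel-by-pixel from the real exposure.
# The arrays here are tiny made-up stand-ins for real image data.
import numpy as np

def subtract_dark_frame(exposure, dark_frame):
    """Subtract the dark frame, clamping at zero so no pixel goes negative."""
    diff = exposure.astype(int) - dark_frame.astype(int)
    return np.clip(diff, 0, 255).astype(np.uint8)

exposure   = np.array([[40, 12, 200]], dtype=np.uint8)
dark_frame = np.array([[ 5, 20,  10]], dtype=np.uint8)
clean = subtract_dark_frame(exposure, dark_frame)
```

Where the dark frame reads hotter than the exposure (the middle pixel here), the result clamps to zero rather than wrapping around, which is why the subtraction must be done in a wider integer type than the 8-bit data.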
<br />
Many of the features explained above will harm an image taken at a camera’s limits of sensitivity. For this reason, by shooting RAW, my camera produces a file without most of these adjustments. Now, Nikon’s View NX2 or Capture NX2 software will do this processing after the fact, and Adobe Camera RAW and other software packages do similar processing. But for my purpose of squeezing the most possible out of a severely underexposed image, I will use <a href="http://www.raw-photo-processor.com/">Raw Photo Processor</a> (RPP), a Mac-only RAW converter that gives me far more control over the processing.<br />
<br />
Using RPP, I process the image in a minimalistic way:<br />
<br />
<a href="http://www.flickr.com/photos/msabeln/7592495358/" title="RPP by msabeln, on Flickr"><img alt="RPP" height="273" src="http://farm8.staticflickr.com/7117/7592495358_370cfa7afe.jpg" width="500" /></a><br />
<br />
Click the image to see it in a larger size.<br />
<br />
The critical settings here override many of the standard image processes found in digital cameras:<br />
<ul><li><i>UniWB</i> gives us the color channels unmodified for white balance. This shows us the native white balance of the camera.</li>
<li><i>Raw RGB TIFF 32-bit</i> does not convert the image to any color space, but simply gives us the RAW channels. This software also gives us the ability to use 16 bits, but I find 16 bits insufficient to squeeze out all of the image data. Certainly 8 bits would <i>not</i> suffice to give us good tonal detail.</li>
<li><i>Gamma 1.0</i> overrides the gamma correction function. Since I’m brightening the image, gamma correction ought to come after the brightening: doing so beforehand will alter the tonality of the image.</li>
</ul>I brought the image into Photoshop, and since the CS5 version of that product has limited support for 32-bit files, I immediately converted it to 16 bits using the <i>Exposure and Gamma</i> method, with settings of Exposure +6.6 stops and Gamma 1. This gives me a brightness equivalent to a shutter speed 97 times longer than what I used. This step could have been done in RPP, but I do it this way because sometimes I use other methods of brightening in Photoshop.<br />
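The stops-to-multiplier relationship used above is simple to verify: each stop doubles the linear signal, so +6.6 stops is 2<sup>6.6</sup>, or roughly 97×, matching the equivalent-shutter-speed figure. A one-line sketch:

```python
# Each photographic stop doubles the linear signal, so an exposure
# adjustment of s stops multiplies linear values by 2 ** s.
def exposure_multiplier(stops):
    """Return the linear brightness multiplier for a given stop adjustment."""
    return 2.0 ** stops

# The +6.6-stop adjustment used in Photoshop above:
factor = exposure_multiplier(6.6)
```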
<br />
<a href="http://www.flickr.com/photos/msabeln/7593127458/" title="DSC_5231 - from RAW - no WB by msabeln, on Flickr"><img alt="DSC_5231 - from RAW - no WB" height="331" src="http://farm8.staticflickr.com/7112/7593127458_bc018205ac_o.jpg" width="500" /></a><br />
<br />
Not too bad, especially considering the earlier job of brightening that I did before. This is not white balanced, and as mentioned, the RAW processor did not convert the camera’s color space to approximate human vision. 22.5% of the pixels are black, but at least my main subject — the canoe — is clearly visible, showing that we managed to squeeze a few more bits of data from the camera.<br />
<br />
This is a noisy image, as we should expect, but we ought to be able to get a fairly clean monochrome image out of this. If we examine the three color channels, we can see that some are noisier than others:<br />
<br />
<a href="http://www.flickr.com/photos/msabeln/7593212750/" title="DSC_5231 - from RAW - channel detail by msabeln, on Flickr"><img alt="DSC_5231 - from RAW - channel detail" height="993" src="http://farm8.staticflickr.com/7267/7593212750_ba76f2cbaa_o.jpg" width="500" /></a><br />
<br />
From top to bottom, we see the same patch of the canoe, at 100% magnification, for the red, green, and blue channels. Clearly the red channel is noisier than the other two, and if we do a white balance on this gray colored canoe, the red channel would have to be amplified greatly, increasing noise greatly. The green and blue channels seem to be close in the amount of noise.<br />
<br />
For a monochrome version of this photograph, I’d discard the red channel, and mix together the green and blue:<br />
<br />
<a href="http://www.flickr.com/photos/msabeln/7593259422/" title="DSC_5231 - from RAW - monochrome using green and blue channels by msabeln, on Flickr"><img alt="DSC_5231 - from RAW - monochrome using green and blue channels" height="331" src="http://farm8.staticflickr.com/7137/7593259422_06978c1c53_o.jpg" width="500" /></a><br />
<br />
This image is still noisy, but who would have ever expected that we could get a photograph this good from an image that was underexposed by 6 and 2/3<sup>rds</sup> stops?<br />
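The channel-mixing step described above — discarding the noisy red channel and blending green and blue — can be sketched as follows. The 50/50 weighting is my assumption for illustration; any blend of the two cleaner channels would serve, and the sample pixel is made up.

```python
# Sketch of building a monochrome image from only the green and blue
# channels, discarding the noisy red channel entirely. The equal
# weighting is an assumption; other blends would also work.
import numpy as np

def mono_from_green_blue(rgb):
    """Average the green and blue channels; ignore red."""
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    return ((g + b) / 2.0).round().astype(np.uint8)

# A made-up pixel: the red value (250) has no effect on the result.
rgb = np.array([[[250, 100, 120]]], dtype=np.uint8)
mono = mono_from_green_blue(rgb)
```

However noisy or bright the red channel is, it contributes nothing to the output, which is the point of the technique.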
<br />
Now, I’d like to get a color image. This is problematic because of the extreme noise found in the red color channel, and none of my noise reduction methods was able to reduce it to something tractable — the noise was highly speckled, and blurring that channel with a radius large enough to remove the speckle caused lots of color bleed from the gravel to the boat.<br />
<br />
<a href="http://www.flickr.com/photos/msabeln/7593890976/" title="DSC_5231 - noise in white balance by msabeln, on Flickr"><img alt="DSC_5231 - noise in white balance" height="331" src="http://farm9.staticflickr.com/8156/7593890976_cedfe64890_o.jpg" width="500" /></a><br />
<br />
Much of the noise we have is from the red channel, so we should expect color shifts between red and its opponent color cyan, as we see in this image. <br />
<br />
Now, I know that the canoe is gray, while the gravel is reddish brown, and so I constructed a plausible red channel from the blue and green channels, making it a bit brighter so as to bring out the red of the gravel. I did severe noise reduction on these colors, and then I adjusted the white balance so that the canoe is neutral in color. As a general rule of thumb, in a white-balanced image, the red channel will be the lightest in overall tonality, while the blue channel will be the darkest; this observation can be used to good effect when doing channel replacement.<br />
<br />
<a href="http://www.flickr.com/photos/msabeln/7593825244/" title="DSC_5231 - blurred colors by msabeln, on Flickr"><img alt="DSC_5231 - blurred colors" height="331" src="http://farm9.staticflickr.com/8021/7593825244_838a845975_o.jpg" width="500" /></a><br />
<br />
Luminance is more important than color, and so even though this color image is very blurred and rough, it is adequate in its color content. Putting this image as a layer on top of the good monochrome image, and setting the blending mode to ‘color,’ I was able to produce a plausibly colored image.<br />
<br />
<a href="http://www.flickr.com/photos/msabeln/7593826702/" title="DSC_5231 - from RAW - color by msabeln, on Flickr"><img alt="DSC_5231 - from RAW - color" height="331" src="http://farm8.staticflickr.com/7132/7593826702_183180e473_o.jpg" width="500" /></a><br />
<br />
This is not too bad, considering that the image started out looking like this:<br />
<br />
<a href="http://www.flickr.com/photos/msabeln/7530528702/" title="DSC_5231 by msabeln, on Flickr"><img alt="DSC_5231" height="332" src="http://farm9.staticflickr.com/8006/7530528702_a27f7cb47e_o.jpg" width="500" /></a>Mark S. Abelnhttp://www.blogger.com/profile/06692448528819277158noreply@blogger.com0tag:blogger.com,1999:blog-8768375296475349032.post-77543199092125379582012-07-11T15:13:00.000-05:002012-07-11T15:13:32.584-05:00“Your Central Visual Field”<table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: left; margin-right: 1em; text-align: left;"><tbody>
<tr><td style="text-align: center;"><a href="http://imgs.xkcd.com/comics/visual_field.png" imageanchor="1" style="clear: left; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" height="290" src="http://imgs.xkcd.com/comics/visual_field.png" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><span class="Apple-style-span" style="font-size: small;"><a href="http://xkcd.com/1080/">http://xkcd.com/1080/</a></span></td></tr>
</tbody></table>
<br />
Click the <a href="http://xkcd.com/1080/">link</a> for a larger image.Mark S. Abelnhttp://www.blogger.com/profile/06692448528819277158noreply@blogger.com0tag:blogger.com,1999:blog-8768375296475349032.post-48012709238123787932012-06-17T19:25:00.005-05:002012-06-17T19:25:50.859-05:00Giving Credit...<span class="Apple-style-span" style="font-size: x-large;">...WHERE CREDIT</span> is due.<br />
<br />
Recently I wrote about landscape photography, in the article <a href="http://therefractedlight.blogspot.com/2012/06/composition-part-2-composition-and.html">Composition, Part 2 - Composition and Subject in Landscape Photography</a>. But I failed to mention someone who helped me out tremendously, not only by often driving me around to locations, and providing moral support and encouragement, but also by pointing out good camera positions. I owe a lot to Tina, whose photos can be found at <a href="http://snupsphotos.blogspot.com/">http://snupsphotos.blogspot.com</a>.<br />
<br />
<br />Mark S. Abelnhttp://www.blogger.com/profile/06692448528819277158noreply@blogger.com1tag:blogger.com,1999:blog-8768375296475349032.post-28607828140463806422012-06-17T09:50:00.000-05:002012-06-17T19:27:36.614-05:00Composition, Part 2 - Composition and Subject in Landscape Photography<span class="Apple-style-span" style="font-size: x-large;">A WHILE BACK,</span> I got a somewhat difficult assignment: I was to photograph a considerable number of city parks for a coffee table photo book. While I liked my architectural photos, I’d always been rather disappointed with my landscapes, as I mentioned in an earlier article, <a href="http://therefractedlight.blogspot.com/2012/01/composition-part-1-frame.html">Composition, Part 1 - the Frame</a>. My publisher, Reedy Press, must have thought I was up to the task, even though I was uncertain. But with a year to study, experiment, and shoot, I was able to successfully produce many good photos. Certainly I’m no master of the subject, but I think it might be useful to share some of what I learned while shooting this book.<br />
<br />
<a href="http://www.flickr.com/photos/msabeln/7098797917/" title="St Louis Parks cover_high by msabeln, on Flickr"><img alt="St Louis Parks cover_high" height="386" src="http://farm8.staticflickr.com/7189/7098797917_afb5c0c83d.jpg" width="500" /></a><br />
<br />
The final book, <a href="http://therefractedlight.blogspot.com/2012/04/st-louis-parks.html"><i>St. Louis Parks</i></a>, turned out well, and it has been well-received by the public. Please click <a href="http://therefractedlight.blogspot.com/2012/04/st-louis-parks.html">here</a> if you would like to purchase a copy, autographed by me.<br />
<br />
My publisher selected the photo above for the cover of the book, and I generally like it. It isn’t perfect — the sky appears to have a slight greenish tone, especially when seen under fluorescent illumination (although it is correctly white-balanced, and I didn’t alter the hue in post-processing), and the image is a bit darker than I’d like. The formal symmetry, with the fountain and building centered with each other and with the frame, is pleasing to me, but it is slightly off — although this is offset by the presence of the spine, not seen here, on the left hand of the book. What makes the photo, I think, is the presence of teenagers enjoying the fountain; having human subjects in a landscape photo is often appealing. The photo is technically OK, has a good subject, and is composed adequately, making it, in the opinion of my publisher, good enough to be on the cover of a book.<br />
<br />
Generally speaking, there is a certain lightness of spirit or relief you can get when you leave certain decisions to others — were I to have selected the photos for the book, I think I would have agonized too much over them, seeing little else than flaws. Instead, my publisher selected images that he thought had general appeal, and he usually selected my favorites. Artists are often not the best judges of their works. Getting a sense of what is good takes understanding, time, and experience, as well as receiving the good judgement of others.<br />
<br />
The first step towards getting better in photography, or any art, I think, is to understand <i>why</i> your works are disappointing, and understanding what makes good images superior. This can be exceptionally difficult, for oftentimes it is hard to put vague feelings into words. Determining what actions to take can be difficult also, for it requires an understanding of the technology. For example, you may find that your photographs are too yellow, but you have to understand color theory in order to know that you must make the photos <i>more blue</i> to cancel out the yellow, and you have to understand manual white balance on the camera, or the use of post-processing on the computer to correct for this flaw.<br />
<a name='more'></a><span class="Apple-style-span" style="font-size: 19px; font-weight: bold;">A Lofty Target</span><br />
<br />
It is easy to become inspired by awesome works of art, which can give you the desire to be able to do the same. When I started work on this project, I examined many highly-regarded landscape photos — and <a href="http://en.wikipedia.org/wiki/Landscape_art">landscape paintings</a> — and sought out good advice. <i>What</i> do the best landscape photographers do? <i>How</i> can I do the same?<br />
<br />
<a href="http://photo.net/gallery/photocritique/filter?period=5000&rank_by=sum&category=Landscape&store_prefs_p=0">Click here</a> for some highly-rated images. Do you find those images good? If so, what in your opinion makes them good? If you don’t like them, what landscape photos do you find appealing? Why?<br />
<br />
This kind of inspiration is a pull from above, and so serves as a worthy goal. But it can be simultaneously fruitless and discouraging if you don’t have the right attitude. If your target is so lofty, and you never come even close to hitting the target, do you want to give up?<br />
<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhFYnSopxdMSb1vUTGSecjj8u840JqdJMESaIC5dFDEE9wJaEj0ZeY68SzeKVd3DDEgEac83QsdrdtJJuJCj8ul2fv-ie6oCvFLZduktYJNB4oY_qdtfLzpMevJkPl6xg1oLnz_KtEnNG8/s1600/Adams_The_Tetons_and_the_Snake_River.jpg"><img border="0" height="400" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhFYnSopxdMSb1vUTGSecjj8u840JqdJMESaIC5dFDEE9wJaEj0ZeY68SzeKVd3DDEgEac83QsdrdtJJuJCj8ul2fv-ie6oCvFLZduktYJNB4oY_qdtfLzpMevJkPl6xg1oLnz_KtEnNG8/s640/Adams_The_Tetons_and_the_Snake_River.jpg" width="500" /></a><br />
<br />
<i>Ansel Adams, </i>The Tetons and the Snake River <i>(1942) Grand Teton National Park, Wyoming. National Archives and Records Administration, Records of the National Park Service. (79-AAG-1)</i> [<a href="http://en.wikipedia.org/wiki/File:Adams_The_Tetons_and_the_Snake_River.jpg">source</a>]<br />
<br />
I see two problems.<br />
<br />
One is the desire to “<b>fill Ansel Adams’ tripod holes.</b>” This desire to duplicate that one Great Photograph is often present with ardent photographers who are more than beginners, who may be somewhat knowledgeable, but who may take a dilettantish view of the subject. While perhaps this is a worthy exercise for some, it leads to a repetition of photographs of the same subject from the same location, and so can turn an awesome subject into something dull or overly familiar. Typical landscape subjects, duplicated extensively, include Adams’ <a href="http://www.google.com/search?q=half+dome&hl=en&prmd=imvns&source=lnms&tbm=isch&ei=r6PLT4HmEYiA2gWS4ZzaCw&sa=X&oi=mode_link&ct=mode&cd=2&ved=0CFYQ_AUoAQ&biw=1678&bih=944">Half Dome</a>, and photos of <a href="http://www.google.com/search?q=delicate+arch&hl=en&client=safari&rls=en&prmd=imvns&source=lnms&tbm=isch&ei=G6PLT-rROaGK2gXo9fTZCw&sa=X&oi=mode_link&ct=mode&cd=2&ved=0CFgQ_AUoAQ&biw=1678&bih=944">Delicate Arch</a> and <a href="http://www.flickr.com/groups/mountfuji/pool/">Mount Fuji</a>, and we can include numerous other tourist attractions, such as the <a href="http://www.flickr.com/search/?q=gateway+arch">Gateway Arch</a>.<br />
<br />
But this trivializes the effort of the original photographer; Adams’ image of Half Dome is much more than simply location and subject, for time of day and year, and the weather were important, as well as his extensive darkroom work.<br />
<br />
This attitude also leaves out room for improvement. Perhaps Adams was tired, and placed his tripod simply because his feet hurt, it was getting late in the day, and he didn’t want to walk any further. Perhaps there is a much better location which will never be discovered because of Adams’ precedent. We also have new technology, barely dreamed of in those days. What are the possibilities today?<br />
<br />
But you should ask yourself: why spend the money, time, and trouble to travel to Yosemite Valley? Aren’t there worthy subjects close to home? If you are being paid to travel to remote locales, then certainly you can explore the world photographically. But in general,<i> if you want to be a good landscape photographer, you have to take lots of photos of landscapes,</i> and you can’t expect to pack all of your experience into a two-week vacation. Are there undiscovered landscapes near your home, ones that are destined to be featured in a future Great Photograph? Perhaps you can photograph what you know and love — perhaps, by being a local, you can see some detail that all the tourists miss. Never forget that what is commonplace to you is exotic and unusual to most of humanity.<br />
<br />
<a href="http://www.flickr.com/photos/msabeln/3078972583/" title="Forest 44 Conservation Area, near Valley Park, Missouri, USA - open field at dusk by msabeln, on Flickr"><img alt="Forest 44 Conservation Area, near Valley Park, Missouri, USA - open field at dusk" height="332" src="http://farm4.staticflickr.com/3034/3078972583_f5ccc68aeb.jpg" width="500" /></a><br />
<br />
<i>I live not too far from Forest 44 Conservation Area, which is near Valley Park, Missouri; I’ve long thought that some good landscape photography can be had here, and am willing to keep trying to get the good shot.</i><br />
<br />
I used the example of Ansel Adams (1902-1984) here simply because he is the only photographer that most people know by name, and because beginners will often say that they are inspired by his work. But there are many other great photographs and great photographers (and let’s not forget about painting) — perhaps we ought to do research and broaden our horizons in finding inspiration. <a href="http://en.wikipedia.org/wiki/List_of_photographers">Click here for a list of notable photographers</a>. <a href="http://en.wikipedia.org/wiki/Lists_of_painters">Click here to find notable painters</a>.<br />
<br />
Another problem is similar: people want to use the same gear used by the Great Photographer. “<i>Did Adams shoot Canon or Nikon?</i>” people actually ask. Clearly this can become a great expense, and can be fruitless and counterproductive if you don’t know how to use your gear for best effect. I mainly shot <i>St. Louis Parks</i> with what is now a rather antiquated digital camera with ordinary optics, and I’ve already completed the principal photography for my next two books with the same equipment: “<i>But,</i>” I say to myself, “<i>my next projects will be taken with a <a href="http://www.amazon.com/gp/product/B005OL2ID2/ref=as_li_ss_tl?ie=UTF8&tag=romeofthewest-20&linkCode=as2&camp=1789&creative=390957&creativeASIN=B005OL2ID2">Nikon D800E</a>. I also need to upgrade my lenses.</i>” Yes, an upgrade would be very <i>nice</i>, but my results are certainly <i>adequate</i> for my present needs. Maybe I’m pushing the limit of optical quality (especially on two-page spreads), and maybe a D5100 (much cheaper than the D800E) is more in line with what I really need. But chasing gear — even by a professional — can turn photography into an expensive collectors’ hobby, and perhaps the money can be more wisely spent.<br />
<br />
<a href="http://www.flickr.com/photos/msabeln/7284781384/" title="Wilmore - trees at sunset by msabeln, on Flickr"><img alt="Wilmore - trees at sunset" height="375" src="http://farm8.staticflickr.com/7075/7284781384_fa2ca691bf.jpg" width="500" /></a><br />
<br />
<i>This photo from the book gets lots of positive comments. I took it with a cheap point-and-shoot camera that you can buy used for about $200. These trees would have made an ordinary snapshot if not for the late afternoon light.</i><br />
<br />
There are a few things that I’ve learned from studying highly-regarded landscapes, both paintings and photos, and nearly all the best landscapes have these things in common:<br />
<ol>
<li>A good landscape is either of an <a href="http://www.romeofthewest.com/2011/08/on-sublime.html">epic subject</a>, or it is of a beautiful subject, or both.</li>
<li>Ordinary landscapes can be good if there is significant use of unusual camera work, such as good use of a wide angle lens, effective supplemental lighting, an unusual camera position, or other uncommon things of interest.</li>
<li>Good landscapes are taken around sunset or sunrise, and sometimes at night, and almost never during midday.</li>
<li>Good landscapes often depict unusual weather, such as fog or snow, or impressive clouds, or dramatic lighting.</li>
<li>Monochrome landscapes typically have a full range of tonalities; color landscapes typically have distinctive color, at least in detail.</li>
<li>A good landscape has an essential unity, a clear subject, and a composition that harmonizes with the subject, enhancing and directing attention towards it. Good landscapes are considerably more abstract than ordinary snapshots.</li>
<li>Including a human in the image can transform the meaning and impression of the image dramatically, often for the better.</li>
<li>Good landscapes are almost always made with good equipment and good technique. Rarely do you see a good landscape with obvious technical mistakes or bad image quality, as you will often see with excellent photojournalistic images.</li>
</ol>
Now while I tried to keep these observations in mind while shooting, I can’t claim to have been fully successful. I’m still learning and practicing.<br />
<br />
As I mentioned, while a lofty target can be a great inspiration, you have to examine yourself and understand your limits and your capabilities. If you are five feet tall or 55 years old, a dream of becoming a professional basketball player is unrealistic; it is even unrealistic for the vast majority of prime, top-rated young athletes. Don’t have extraordinary expectations unless you have a reason for your hope. Perhaps there is something else you can do well? What are your talents?<br />
<br />
<br />
<div style="margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px;">
Please see the article <a href="http://therefractedlight.blogspot.com/2012/04/good-photograph.html">A Good Photograph</a> for additional lofty meditations.</div>
<br />
<h3>
Flee from Evil</h3>
As I mentioned, finding inspiration in the best work done by the masters is important, but can be discouraging, since the target is so high. Another, perhaps easier, approach is to identify errors or problems in your photos, and then learn how to eliminate them. Avoiding bad photography is a push from below, and has the advantage of being easy to learn. Some dislike this negative approach, full of “thou shalt nots,” but it helps if we both can identify the good and how to avoid evil.<br />
<br />
<a href="http://www.flickr.com/photos/msabeln/7284917712/" title="PICT0098 by msabeln, on Flickr"><img alt="PICT0098" height="375" src="http://farm8.staticflickr.com/7211/7284917712_17d53dbd97.jpg" width="500" /></a><br />
<br />
<i>It was not my intention to set the focus on the branches seen here on the right, but that is what my camera’s automatic focus selected. That camera was nearly impossible to focus manually. My current camera, like many DSLRs, isn’t very good for manual focus, which is a shame.</i><br />
<br />
Taking this bottom-up approach means that you first have to understand your gear and the basic principles of photography — dull things, perhaps, far removed from what many people call Art — but you can’t take a photograph unless you have a camera, and you usually can’t take a good photograph unless you know how to use your camera.<br />
<br />
However, I recommend that a beginner take a multi-pronged approach to photographic technique, working on improving in parallel the choice of subject, camera settings, composition, and general appeal.<br />
<br />
<a href="http://www.flickr.com/photos/msabeln/7284964422/" title="PICT0118 by msabeln, on Flickr"><img alt="PICT0118" height="375" src="http://farm8.staticflickr.com/7085/7284964422_063f78f5e3.jpg" width="500" /></a><br />
<br />
This is a particularly bad photo, taken under dull skies, with a huge pile of dirt on the right, a number of barely visible barges, and lots of dark muddy detail in the middle. What is the subject of this photo? What did I want to depict? If I wanted to depict a pile of dirt, why did I not depict it boldly and purposefully? Or why didn’t I zoom in to the barges? Or, rather, should I leave my camera home when the skies are overcast and uninteresting? Also, I might add that digital cameras often render shadows poorly, as seen here, making post processing or supplemental lighting almost a requirement for good photography.<br />
<br />
<a href="http://www.flickr.com/photos/msabeln/7285045316/" title="Bellerive - view of tugboat from park by msabeln, on Flickr"><img alt="Bellerive - view of tugboat from park" height="375" src="http://farm8.staticflickr.com/7097/7285045316_3fe2494644.jpg" width="500" /></a><br />
<br />
My publisher considered using this photo. While the stray grasses at center bottom are distracting, and the trees are in the way, this photo still has a good subject with human interest. I specifically zoomed in to get one major subject, the boat, and the two onlookers on the left and right, without huge expanses of uninteresting detail. Not great, but it is an improvement over the previous photo.<br />
<br />
Far more interesting (despite the tree branches) is the following photograph of tugboats, also on the Mississippi River:<br />
<br />
<a href="http://www.flickr.com/photos/msabeln/5915492132/" title="Cliff Cave County Park, in Mehlville, Missouri, USA - two tugboats on the Mississippi River by msabeln, on Flickr"><img alt="Cliff Cave County Park, in Mehlville, Missouri, USA - two tugboats on the Mississippi River" height="441" src="http://farm7.staticflickr.com/6049/5915492132_a6d58e60f9.jpg" width="500" /></a><br />
<br />
But since this photo was not taken from one of my assigned parks, it could not go into the book.<br />
<br />
Here are some mistakes that you ought to avoid:<br />
<ul>
<li>Bad focus. Be sure you focus on your subject. Automatic focus will often select the <i>closest</i> object, leaving a distant subject blurry. For landscapes, you can hardly go wrong if you focus on the most distant significant detail and stop down to an aperture that gives sufficient depth of field. Be aware that the larger your camera’s sensor, the narrower the depth of field, and so the greater the chance of bad focus.</li>
<li>No clear subject. What, specifically, is important in your photo? What catches your attention in the scene? What is distracting? Are you sure that your photograph will capture the scene in a way that will show the interesting subject clearly?</li>
<li>Using a wide angle lens to ‘get the whole scene.’ Is the whole scene interesting? Be aware that a wide angle lens will make distant detail look smaller, while emphasizing the foreground: is this what you want? Ultra wide angle lenses will distort the edges; do you intend that?</li>
<li>Bad framing. How much sky is in your photo? Is the sky interesting? Does too much space surround your subject? Will getting closer, or using a telephoto lens help? Is your subject nicely placed within the frame? Does your camera position and framing harmonize with the symmetry of the subject?</li>
<li>Clutter. Tree trunks and branches, randomly scattered across your image, can detract from your photo of a forest. Rather, seek out scenes that are simpler, cleaner, more ordered, and more uniform. A certain amount of abstraction will often make a better image.</li>
<li>Softness. Landscapes are usually enhanced with crisp detail. Softness due to poor optics or technique is often disappointing. </li>
<li>Bad exposure. Is significant detail lost because of over or underexposure? Be aware that having even one of the three color channels overexposed will cause a shift in color. See the article <a href="http://therefractedlight.blogspot.com/2010/06/three-opportunities-for-overexposure.html">Three Opportunities for Overexposure</a> for details. Better lighting, high dynamic range techniques, or significant post-processing on the computer may be needed to correct for this.</li>
<li>A tilted camera. Do you intend the horizon line to be crooked? With wide angle lenses, be aware that there may be a significant perspective distortion on vertical lines if the camera is not held level. </li>
<li>Bad white balance. Most cameras do a good job when taking photos in broad daylight, but photos in the shade may turn out blue, or photos at sunset may have a disappointing lack of golden color. Be aware of the limits of automatic white balance, and override the function manually to get the right color. See the article <a href="http://therefractedlight.blogspot.com/2011/01/white-balance-part-1.html">White Balance, Part 1</a> for more details.</li>
<li>Bad color. Are the colors captured by the camera dull and lifeless? If so, would your image look good in monochrome? Either post-processing, choosing a better time of day, or a more interesting weather condition may help.</li>
<li>Bad lighting. Do shadows enhance your subject, or do they distract from it? Does the soft uniform lighting from an overcast sky improve your image, or does the white featureless sky distract from it? Does dappled sunlight enhance your image, or does it make the scene unintelligible?</li>
<li>Shooting into the sun. Do you intend to have lens flare and other optical artifacts in your image? Do you intend to have a black, featureless foreground, or an overexposed sky?</li>
<li>Not knowing your limitations. Most cameras at this time do not have good low-light performance: taking a photograph at dusk, without a tripod, may give you photos that have excessive noise or blur due to camera shake. Does your knowledge of theory outstrip your experience, leading to misguided photographic choices? Is your theoretical knowledge weak, leading you to have a borderline superstitious understanding of your camera? Are you seeking outside help to improve your photography? Do you shoot with a friend or with a group? Do you seek critique of your photos?</li>
<li>Chasing gear. Do you think that better camera gear will automatically get you better photos without any effort on your part? Is there a better use for your money? Squeezing the last bit of quality from inferior camera equipment can be an excellent experience, and not just a frustration.</li>
<li>Laziness. Do you intend to take superior photographs, but aren’t willing to make an extra effort to get them? Are you willing to get up earlier than is customary for you? Are you willing to wait until the lighting is better? Are you willing to be somewhat uncomfortable when finding a better place to take a photograph?</li>
<li>Not knowing your audience. Are you shooting for a family album, for a popular book, or for a curated art gallery? Your images may vary depending on your audience expectations.</li>
</ul>
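The color shift from a single clipped channel, mentioned under ‘Bad exposure’ above, is easy to see with a little arithmetic. The following is a toy Python sketch of 8-bit channel clipping, not any real camera’s processing pipeline:<br />

```python
def expose(rgb, stops):
    """Scale an RGB triple by 2**stops and clip each channel to the 8-bit maximum."""
    return tuple(min(round(c * 2 ** stops), 255) for c in rgb)

# A warm orange: the red channel is already near the top of the range.
orange = (240, 160, 60)

print(expose(orange, 0))    # (240, 160, 60) — the original hue
print(expose(orange, 0.5))  # (255, 226, 85) — red clips, green and blue do not
```

Here a half-stop of extra exposure clips only the red channel, so the red-to-green ratio drops from 1.5 to about 1.1 and the orange shifts toward yellow, even though nothing looks obviously ‘blown out’ at a glance.<br />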
<div>
Keeping these mistakes in mind can help you become a better photographer. But many of these things can be a good choice if you know that it will make for a superior photograph. Ultimately, it is the wisdom that comes from knowledge, experience, and inspiration that will help.<br />
<br /></div>
<h3>
Some Examples</h3>
<a href="http://www.flickr.com/photos/msabeln/7285114744/" title="DSC_0345 by msabeln, on Flickr"><img alt="DSC_0345" height="333" src="http://farm8.staticflickr.com/7222/7285114744_aedb963b99.jpg" width="500" /></a><br />
<br />
I was provided with a list of parks to photograph. Sometimes, an assigned park would be only a rectangle of grass, perhaps surrounded by a fence. How can I take an appealing photograph of that? But I had a job to do. Driving around this park, something in this row of bushes caught my eye, although this photo was disappointing.<br />
<br />
My camera’s automatic white balance feature subtracted out the color of the sun, low in the sky. As with photos from most digital cameras, both highlight and shadow detail are flattened and dull. Good camera work was not enough here; I had to bring out this detail in post-processing.<br />
<br />
<a href="http://www.flickr.com/photos/msabeln/7285114418/" title="Jackson - snow by msabeln, on Flickr"><img alt="Jackson - snow" height="358" src="http://farm8.staticflickr.com/7095/7285114418_91be8fa679.jpg" width="500" /></a><br />
<br />
Here the color of the sky is restored, and we can see texture there, as well as in the snow with its distinct animal tracks. We can even see some texture in the dark bushes, and the chain link mesh of the fence is more distinct. With cropping, we have a composed row of four bushes that harmonize with the fence. I knew that there was an interesting photo to be found here.<br />
<br />
<a href="http://www.flickr.com/photos/msabeln/7285176438/" title="DSC_0098 by msabeln, on Flickr"><img alt="DSC_0098" height="333" src="http://farm8.staticflickr.com/7080/7285176438_0591ff930c.jpg" width="500" /></a><br />
<br />
Saint Louis gets snow frequently in winter, but it typically does not get deep and it melts quickly. One night, snowfall was heavy and I stayed out until 4 in the morning taking photos to document the unusual weather. The light in the city was an orange color due to street lighting reflecting off the large falling flakes, and it was so bright that I was able to hand-hold the camera easily. I liked this bridge, and even took nine photos of it from this position, but the photos are slightly disappointing. A different camera position improves the subject:<br />
<br />
<a href="http://www.flickr.com/photos/msabeln/5374100345/" title="Carondelet Park, in Saint Louis, Missouri, USA - railroad track and bridges, at night, in the snow by msabeln, on Flickr"><img alt="Carondelet Park, in Saint Louis, Missouri, USA - railroad track and bridges, at night, in the snow" height="333" src="http://farm6.staticflickr.com/5046/5374100345_ecfe519710.jpg" width="500" /></a><br />
<br />
Railroad tracks — and footprints — lead into the distance, while one bridge acts as a frame for another. Many Saint Louisans are familiar with this park, and undoubtedly some have seen this bridge from this angle, but not under these weather conditions. A possibly ordinary subject becomes interesting.<br />
<br />
A number of photos from that night made it into the book, because the color was so strong and the weather and time of day so unusual. That night made common things uncommon.<br />
<br />
Generally speaking, the best landscape photographers are willing to get wet, muddy, and hike the extra mile in order to get their shot, which is something that I try to remind myself when I’d rather be at home in the air-conditioning, sitting in front of my computer.<br />
<br />
Taking numerous shots of the same subject, and examining their strengths and flaws, can improve your photography:<br />
<br />
<a href="http://www.flickr.com/photos/msabeln/7285301084/" title="DSCF6794 by msabeln, on Flickr"><img alt="DSCF6794" height="375" src="http://farm8.staticflickr.com/7239/7285301084_d8207c8f77.jpg" width="500" /></a><br />
<br />
Here we have the same scene as found on the book cover. I’ve taken many photos of it over time, and another view, taken at night, found its way into the book. Here I attempted to align the fountain precisely with the building in the background, but I didn’t bother to keep the camera level. The image is also underexposed, even though that particular camera has little dynamic range to spare.<br />
<br />
Note that I took it with a wide angle lens. Many beginning landscape photographers think that they need a wide, or ultra wide, angle lens in order to ‘get the whole scene in,’ and undoubtedly I thought that I needed to set the zoom to wide for the same reason. But look at how much of the image is featureless! The sky and the water here add nothing to the photo; they are inessential detail, as is much of the green grass. For this reason, it is commonly advised that you ought to ‘fill the frame’ with your subject — get close with either your zoom or your feet. (I might add that you probably ought to leave a little room around your subject for cropping.)<br />
<br />
Please compare the relative sizes of the building and waterfall in both images. The cover photo was taken from a greater distance with a telephoto lens, and so the building and waterfall appear to be closer together; because the angle of view of the lens is smaller, we have less inessential detail of the sky, grass, and water. In the cover photo, the narrow strip of water serves to frame the main subject and act as a transition to the edge of the cover, while the relatively featureless sky is not wasted space, for in it we find the names of the authors.<br />
<br />
Finally, notice that the two figures on the right are walking <i>away</i> from the fountain, and contrast that with the youth on the book cover, who appear to be enjoying the fountain. The eye is drawn towards humans, and if the humans in the photo appear to be uninterested in the scene, what does that tell you about your subject? I’ve seen some excellent landscapes where we are shown a human figure, looking at the scene with the same awe that the photographer or painter would want us to feel.<br />
<br />
I had taken many photos of that waterfall over a period of time, and many of those were taken with considerable care and scene analysis. But on the day I took the cover photo, which was a Thursday, I was undoubtedly busy doing other things; I wasn’t out taking photos, but I did have my camera with me. As I approached the waterfall, I noticed the youth at play, and thinking this interesting, I took three photos as I got close. I knew I would have the right shot when I was centered on the waterfall: I took that final photo and then left. In less than three minutes, without much forethought, with little setup, with just one photo, and under non-optimal conditions, I got the prime shot. I knew that it would be good. <i>Experience helps</i>.<br />
<br />
Here is a particularly bad example:<br />
<br />
<a href="http://www.flickr.com/photos/msabeln/7355513020/" title="Ruins at Welsh Spring, on the Current River, Missouri by msabeln, on Flickr"><img alt="Ruins at Welsh Spring, on the Current River, Missouri" height="500" src="http://farm8.staticflickr.com/7084/7355513020_4ab5cbd2c9.jpg" width="333" /></a><br />
<br />
These old ruins are an interesting subject, but this photo is typical of innumerable bad landscape photos. What does my photo show? Clearly, there are the remains of some sort of structure here, but the vegetation, and especially the dappled lighting, make this a confusing and unsatisfactory image.<br />
<br />
I usually find overcast days poor for landscapes, but the diffuse sky lighting might be a benefit here. Zooming in towards the door to the cave might also help isolate an interesting subject. The lack of vegetation in wintertime might also declutter the scene.<br />
<br />
I’m mainly an architectural photographer, and I usually find people in my photos to be a distraction from the purity of the form of the building. But armed with this attitude, the results of my first day of shooting were dismal:<br />
<div style="margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px;">
<br />
<a href="http://www.flickr.com/photos/msabeln/7323735102/" title="DSC_9960 by msabeln, on Flickr"><img alt="DSC_9960" height="500" src="http://farm8.staticflickr.com/7214/7323735102_5aae283b25.jpg" width="333" /></a><br />
<br />
From then on, I attempted to include people whenever possible:<br />
<br /></div>
<a href="http://www.flickr.com/photos/msabeln/6987677477/" title="Tower Grove Park, in Saint Louis, Missouri, USA - ruins by msabeln, on Flickr"><img alt="Tower Grove Park, in Saint Louis, Missouri, USA - ruins" height="500" src="http://farm8.staticflickr.com/7038/6987677477_97352f08e5.jpg" width="375" /></a><br />
<br />
This was taken near sunset, and the park was illumined by huge clouds in the sky, reflecting orange light from the setting sun. More photos from that evening can be seen <a href="http://www.romeofthewest.com/2012/03/tower-grove-park-at-sunset.html">here</a>. I like those photos for several reasons, including the color of the light, the attractiveness of the park architecture, and the presence of people, waterfowl, and flowers. I also used camera work, I think, to good advantage — I shot these with a telephoto lens at wide aperture, and fixed the camera white balance to Daylight so as to retain the color of the lighting. This particular photo has a number of flaws, and I think the tree in the middle foreground is unfortunate, although the scene is still somewhat interesting.<br />
<br />
It’s far too easy to take a dull photo:<br />
<br />
<a href="http://www.flickr.com/photos/msabeln/7323412106/" title="DSC_9984 by msabeln, on Flickr"><img alt="DSC_9984" height="333" src="http://farm8.staticflickr.com/7102/7323412106_df454c5273.jpg" width="500" /></a><br />
<br />
<i>Dull, dull, dull. Being bored is not only a waste of time, it can also be <a href="http://www.youtube.com/watch?v=XViXch8BuT4">dangerous</a>.</i><br />
<br />
Dull photographs are often weak in texture and color. For this reason, many photographers tend to oversaturate their photographs, boosting the color beyond what is reasonable. I have mixed thoughts about this practice; for sure, digital images lack the contrast, texture, and color range found in real life, and so increased saturation is a legitimate technique to counteract the limits of the medium. Art purchasers also seem to prefer some level of increased saturation. On the other hand, this can harm texture as the medium is driven to the edge of its color gamut, and turn an otherwise plausible landscape into something that looks fake, or some would say dishonest.<br />
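To see why heavy saturation boosts can harm texture, consider a toy Python sketch that pushes each channel away from the pixel’s mean. This is illustrative arithmetic only, not any photo editor’s actual saturation algorithm:<br />

```python
def boost_saturation(rgb, factor):
    """Push each channel away from the pixel's mean, clipping to the 0-255 range."""
    mean = sum(rgb) / 3
    return tuple(min(max(round(mean + (c - mean) * factor), 0), 255) for c in rgb)

# Two slightly different reds: subtle but visible texture in the original.
a, b = (200, 60, 50), (210, 55, 45)

print(boost_saturation(a, 3))  # (255, 0, 0)
print(boost_saturation(b, 3))  # (255, 0, 0)
```

A 3× boost drives both pixels to the same pure red at the edge of the gamut; the difference between them, which is what we perceive as texture, is gone.<br />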
<br />
Alternatively, you can select subjects that are inherently colorful:<br />
<br />
<a href="http://www.flickr.com/photos/msabeln/5002640391/" title="Forest Park Balloon Race, in Saint Louis, Missouri, USA by msabeln, on Flickr"><img alt="Forest Park Balloon Race, in Saint Louis, Missouri, USA" height="500" src="http://farm5.staticflickr.com/4084/5002640391_6f16753623.jpg" width="500" /></a><br />
<br />
<a href="http://www.flickr.com/photos/msabeln/5414413831/" title="Fairgrounds Park, in Saint Louis, Missouri, USA - close-up of frozen berries by msabeln, on Flickr"><img alt="Fairgrounds Park, in Saint Louis, Missouri, USA - close-up of frozen berries" height="399" src="http://farm6.staticflickr.com/5017/5414413831_009c8381db.jpg" width="500" /></a><br />
<br />
<a href="http://www.flickr.com/photos/msabeln/3901001804/" title="Klondike Park, in Saint Charles County, Missouri, USA - white sand and blue water by msabeln, on Flickr"><img alt="Klondike Park, in Saint Charles County, Missouri, USA - white sand and blue water" height="333" src="http://farm3.staticflickr.com/2527/3901001804_78cba1d6e9.jpg" width="500" /></a><br />
<br />
The blue water here caught my eye. But a white sandy beach, in the Saint Louis area? I never thought there would be such a thing nearby. Interesting, unique landscapes are not necessarily far from home. According to Flickr, there are currently only about 100 photos taken at this park, and so it is barely discovered photographically.<br />
<br />
The color of light, rather than merely the color of a subject can make an image appealing, and for this reason, many photographers avoid mid-day. Sunrise and sunset give a natural warm glow to an image, while dusk and dawn give a cool color. Please recall that highly-rated landscape photos are largely taken during these times of day. I am not saying that you can’t take an awesome shot at midday, I’m just saying that most excellent landscape photographs are <i>not</i> taken during that time of day.<br />
<br />
<a href="http://www.flickr.com/photos/msabeln/3926000078/" title="Art Museum, in Forest Park, Saint Louis, Missouri, USA by msabeln, on Flickr"><img alt="Art Museum, in Forest Park, Saint Louis, Missouri, USA" height="356" src="http://farm3.staticflickr.com/2659/3926000078_3440cf4c34.jpg" width="500" /></a><br />
<br />
<a href="http://therefractedlight.blogspot.com/2011/01/white-balance-part-1.html">Automatic white balance</a> in a digital camera attempts to subtract out the color of light, with the goal of leaving neutrally-colored subjects in real life appearing to be neutral in the captured image. For example, a white piece of paper typically reflects light of all frequencies uniformly, and in a white balanced image, the red, green, and blue channels recording the white paper will all have the same value, which shows us that the color of the light was eliminated.<br />
<br />
But sometimes we don’t want to subtract out the color of light. Many photographers, including me, will set the camera to a fixed ‘Daylight’ white balance when shooting at sunrise and sunset. This keeps a good balance between the orange color of the waning sun, and the deep blue of the sky, without neutralizing either color, and so makes the final image particularly colorful.<br />
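A crude ‘gray world’ sketch illustrates what automatic white balance does, and why fixing the balance to Daylight preserves sunset color. This is illustrative Python under the gray-world assumption, not any actual camera’s algorithm:<br />

```python
def gray_world(pixels):
    """Scale each channel so all three channel averages match the overall mean --
    a crude stand-in for a camera's automatic white balance."""
    n = len(pixels)
    avg = [sum(p[i] for p in pixels) / n for i in range(3)]
    gray = sum(avg) / 3
    gains = [gray / a for a in avg]
    return [tuple(min(round(c * g), 255) for c, g in zip(p, gains)) for p in pixels]

# A scene under warm, orange light: red lifted, blue suppressed.
warm = [(220, 180, 120), (110, 90, 60)]
print(gray_world(warm))  # [(173, 173, 173), (87, 87, 87)]
```

Both warmly-lit pixels come out as neutral gray: the orange cast is subtracted out entirely, which is exactly what we do <i>not</i> want at sunrise or sunset.<br />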
<br />
<a href="http://www.flickr.com/photos/msabeln/4769172015/" title="Flooding on Smallpox Island parking lot, at sunset, near Alton, Illinois, USA by msabeln, on Flickr"><img alt="Flooding on Smallpox Island parking lot, at sunset, near Alton, Illinois, USA" height="334" src="http://farm5.staticflickr.com/4119/4769172015_a8215f4182.jpg" width="500" /></a><br />
<br />
Besides the color shown here, I attempted to make this subject, a bench in floodwater, more interesting by composing it precisely within the frame. Order, proportion, harmony, and symmetry in the arts are often considered to be pleasing and interesting. Disorder and disharmony in the composition of a photo can lead to rejection and anxiety in a viewer.<br />
<br />
<a href="http://www.flickr.com/photos/msabeln/3078972023/" title="Meramec River, near Eureka, Missouri, USA - view at dusk from Route 66 State Park bridge by msabeln, on Flickr"><img alt="Meramec River, near Eureka, Missouri, USA - view at dusk from Route 66 State Park bridge" height="333" src="http://farm4.staticflickr.com/3249/3078972023_55633347ab.jpg" width="500" /></a><br />
<br />
<a href="http://www.flickr.com/photos/msabeln/6337875288/" title="Clifton Heights Park by msabeln, on Flickr"><img alt="Clifton Heights Park" height="332" src="http://farm7.staticflickr.com/6103/6337875288_c253b1d43e.jpg" width="500" /></a><br />
<br />
I took a number of photos of this park, and the results were always poor, but dusk made the scene more colorful and, I think, more interesting.<br />
<br />
A brightly colored photo is appealing, since color attracts attention and color has long been considered a component of beauty. But some people think that unusually or implausibly colored shots are manipulative, garish, or pandering to baser instincts, or are somewhat lacking in true art. For example, someone told me that they disliked my balloon photo above because it was too ‘typical’ of an easy, appealing photo. See the article, <a href="http://en.wikipedia.org/wiki/Red_Shirt_School_of_Photography">The Red Shirt School of Photography</a>, which should not be confused with another kind of <a href="http://tvtropes.org/pmwiki/pmwiki.php/Main/RedShirt">Red Shirt</a>.<br />
<br />
We don’t need bright color to produce an interesting landscape image. See the article <a href="http://www.dailymail.co.uk/news/article-2149899/The-American-West-youve-seen-Amazing-19th-century-pictures-landscape-chartered-time.html"><i>How the Wild West REALLY looked: Gorgeous sepia-tinted pictures show the landscape as it was charted for the very first time</i></a>. These 19<sup>th</sup> century images show good subjects, good composition, good camera work, and often show human interest, but these images are all monochrome, and yet they are quite effective.<br />
<br />
While adding saturation to an image may be questionable or even controversial, artificially adding sharpness to a digital image is almost always needed to produce a better final image. Similarly, local contrast enhancement is often an improvement. Cameras don’t see as the human eye sees, and we often have to make image corrections. Sharpening can easily be overdone, making a photo look rough; an undersharpened image, on the other hand, will look soft and dull.<br />
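The standard sharpening technique is the unsharp mask: subtract a blurred copy from the original, and add the difference back, scaled by some amount. A one-dimensional Python sketch (illustrative only, with a simple 3-tap blur) shows both the effect and how overdoing it creates halos:<br />

```python
def unsharp_1d(signal, amount):
    """Unsharp mask on a 1-D signal: blur with a 3-tap moving average,
    then add back (original - blur) scaled by `amount`."""
    blur = [
        (signal[max(i - 1, 0)] + signal[i] + signal[min(i + 1, len(signal) - 1)]) / 3
        for i in range(len(signal))
    ]
    return [round(s + amount * (s - b)) for s, b in zip(signal, blur)]

edge = [10, 10, 10, 200, 200, 200]      # a dark-to-light edge
print(unsharp_1d(edge, 1.0))            # [10, 10, -53, 263, 200, 200]
```

The edge gets steeper, but the overshoot values (-53 and 263, beyond the original 10–200 range) are precisely the dark and light halos seen around edges in oversharpened photos; real software clips them to the valid range.<br />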
<br />
<a href="http://www.flickr.com/photos/msabeln/7329957382/" title="Soft versus sharp by msabeln, on Flickr"><img alt="Soft versus sharp" height="500" src="http://farm9.staticflickr.com/8020/7329957382_bda0c70944.jpg" width="377" /></a><br />
<br />
This shows the same image, resized with two different algorithms. The first is softer than the second, and I think it is less interesting because of that. Please note that there is a quality of blur that can differ between images, and photographers highly value those lenses known for good blur. However, in the digital age, we also need to take into consideration the varying quality of resizing and sharpening algorithms. A high-resolution image, properly resized, will generally look better than a similar image of the same final size taken with a lower resolution camera.<br />
<br />
<a href="http://www.flickr.com/photos/msabeln/3045004810/" title="Lafayette Square Neighborhood, in Saint Louis, Missouri, USA - Lafayette Park - statue of President George Washington by msabeln, on Flickr"><img alt="Lafayette Square Neighborhood, in Saint Louis, Missouri, USA - Lafayette Park - statue of President George Washington" height="500" src="http://farm4.staticflickr.com/3136/3045004810_97bd394b23.jpg" width="333" /></a><br />
<br />
It helps to look at your successful photos — those that you and other people like — and attempt to determine <i>why</i> they may be successful. This image is popular. I think that the quality of the light here, and the color of the foliage, helps strengthen it.<br />
<br />
Depending on where you live in the world, seasonal changes may help make landscape photographs more interesting. Here we see the trees’ leaves changing color and some flowers in the foreground. Normally, overcast skies are inferior for landscapes, but here we have good lighting over most of the scene, and the color of the fall foliage makes the scene more interesting. Because of the soft lighting, I was also able to bring up lots of significant detail in the main subject, the statue, without losing too much texture in the shadows. I think I was fortunate that the bronze color of the statue was significantly distinct from its background.<br />
<br />
Consider an image taken here in summer under broad daylight — would a uniform mass of dark green tree leaves in the background help or harm it? What about the harsh lighting of midday? Or how about in winter — would bare tree branches add anything positive to the image? Or, was this image taken under fortunate conditions?<br />
<br />
<a href="http://www.flickr.com/photos/msabeln/4545202920/" title="Howell Island Conservation Area, in Chesterfield, Missouri, USA - night view of Centaur Chute from causeway to island by msabeln, on Flickr"><img alt="Howell Island Conservation Area, in Chesterfield, Missouri, USA - night view of Centaur Chute from causeway to island" height="332" src="http://farm5.staticflickr.com/4023/4545202920_a8018a7580.jpg" width="500" /></a><br />
<br />
<i>I hand-held the camera while taking this image of an island in a river; the scene was illumined by the light of a half-moon obscured by clouds, and underexposed by five stops. Only about two digital bits of usable tonality are left in this image, giving us a severely abstracted representation of the original scene. This might be suitable for exhibit in an avant-garde gallery, since it comes very close to the limits of photography, and is almost a pure exercise in form and shadow. It is not, however, a pretty landscape photo.</i><br />
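The stop-to-bit arithmetic behind that estimate can be sketched as a rule of thumb: each stop of underexposure halves the recorded signal, discarding roughly one bit from the top of the tonal range; sensor noise then eats another bit or more, which is consistent with the ‘about two’ usable bits here. A hypothetical back-of-the-envelope calculation:<br />

```python
def usable_levels(bit_depth, stops_under):
    """Rule of thumb: each stop of underexposure halves the signal, losing
    roughly one bit of tonality from an ideal, noiseless capture."""
    return 2 ** max(bit_depth - stops_under, 0)

print(usable_levels(8, 0))  # 256 levels: the full 8-bit range
print(usable_levels(8, 5))  # 8 levels left, i.e. about 3 bits at best
```

This ignores noise entirely, so it is an upper bound; in practice the deepest remaining levels are too noisy to use.<br />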
<br />
Imagine if the Super Bowl were stripped of all inessential elements, such as the flashy advertising and the elaborate half-time show, and were only a well-played football game. True football fans would perhaps prefer this less-distracting, more focused kind of game. Likewise, many in the art world most appreciate a photograph that cuts distractions to a minimum, a photograph that gets to the core of photography and of the subject. The general public may beg to differ, for the Super Bowl also has a social aspect, and so we usually have a disconnect between the desires of experts and the wider population, in sports and in the arts. For example, if you visit the Grand Canyon with your family, you certainly ought to photograph your kids in front of the canyon for the sake of the family album, but don’t expect that to appeal to fine arts professionals.<br />
<br />
The English word ‘abstract’ comes from the Latin ‘<i>abstractus</i>,’ meaning “drawn away,” and so the word usually means a taking away of details: the more abstract of two photographs is the one showing less detail. In my opinion, because of how human perception and memory work, a certain level of abstraction is essential in quality photography. Of course, every photograph draws a lot away from the original subject, but I think that even more abstraction than is found in a typical snapshot is necessary for a good final image.<br />
<br />
Herein lies a paradox. Image quality is usually preferred. Most viewers of two otherwise identical photographs will prefer the one that has higher image quality. But most viewers will also prefer an image that has less inessential detail than another. In one study of thousands of photographic portraits, test subjects, on average, rated images with narrow depth of field higher than those with deep depth of field — that is, they rated the more abstract images much higher than those which were less abstract. High quality is appreciated, but having a more abstract final image is <i>also</i> appreciated.<br />
<br />
<a href="http://www.flickr.com/photos/msabeln/5677067920/" title="Centaur Chute and Howell Island at night, off of the Missouri River, in Chesterfield, Missouri, USA by msabeln, on Flickr"><img alt="Centaur Chute and Howell Island at night, off of the Missouri River, in Chesterfield, Missouri, USA" height="331" src="http://farm6.staticflickr.com/5024/5677067920_6c1928fc68.jpg" width="500" /></a><br />
<br />
Here is roughly the same scene as above, but taken with more care. This is certainly more suitable for a popular photo book, but notice how there is more distracting detail, such as the leaves on the right and the logs in the center. I think a superior image would eliminate these details, and zoom in a bit closer to the main channel of the watercourse. I have a number of images of this spot taken at midday, but they are hardly notable.<br />
<br />
<a href="http://www.flickr.com/photos/msabeln/2596424783/" title="Bottomland forest at Castlewood State Park, in Ballwin, Missouri, USA 12 by msabeln, on Flickr"><img alt="Bottomland forest at Castlewood State Park, in Ballwin, Missouri, USA 12" height="375" src="http://farm4.staticflickr.com/3296/2596424783_9695b38100.jpg" width="500" /></a><br />
<br />
I thought that this image of a forest path would be interesting, but it turned out disappointing. We have a nice winding path, but the tree trunks and branches have little order and so are confusing.<br />
<br />
<a href="http://www.flickr.com/photos/msabeln/2650035895/" title="Bourbeuse River, in Noser Mill, Franklin County, Missouri, USA - looking west from above by msabeln, on Flickr"><img alt="Bourbeuse River, in Noser Mill, Franklin County, Missouri, USA - looking west from above" height="500" src="http://farm4.staticflickr.com/3002/2650035895_fe2c060703.jpg" width="375" /></a><br />
<br />
This visually similar photo of a river gets far more notice. The texture on the leaves is complex, but is more uniform than in the previous photo. This photo is simpler than the previous one. One flaw, however, is the dam near the top of the stream, which breaks the visual flow of the image.<br />
<br />
The size of an image, its resolution, and the distance from which the image is viewed is important. Many photographers who intend to show their photos primarily on the Internet will produce images that are significantly simpler than those who sell large, high resolution prints. Significant, eye-catching detail on a large print can be much smaller than what can be seen on a thumbnail image. Very large prints are a trend in high-end fine arts photography, and these huge prints — often yards wide — tend to have an extreme amount of detail. Larger, more detailed images can have a more complex composition, since the viewer will visually subdivide the image into smaller compositions — and the task for the photographer is to make these smaller compositions harmonize with each other.<br />
<h3>
Final Ideas</h3>
<div>
<ul>
<li>Landscapes are not about you, for they are things in themselves, and so a landscape photo ought to clearly show what is true about the scene you are photographing. A heavily manipulated and composited image may be an interesting exercise in digital art, but it is not a landscape photo. If your intent is to show a mood or feelings, then more artifice is allowable, and necessary — your viewers ought to be given clues that your photo is not a natural landscape. </li>
<li>Unlike many other forms of photography, landscapes do not suffer gimmicks or mistakes very well. Use good gear and technique, and get out of the way of the scene itself. Composition is likely to be difficult, so take your time in finding a good camera position. </li>
<li>Landscapes are subtle, and photography tends to flatten out subtleties, requiring us to shoot something extraordinary merely to make the image look normal. The final image ought to please, delight, or move your audience; if it does not, then the image is a failure. </li>
</ul>
<div>
See also the article <a href="http://therefractedlight.blogspot.com/2012/06/giving-credit.html">Giving Credit...</a>; thanks to my friend Tina, who has a good photographic eye and often suggests good camera positions, and who is very supportive. </div>
</div>
<h3>
Digital Forensics</h3>
<span class="Apple-style-span" style="font-size: x-small;">2012-05-12</span><br />
<span class="Apple-style-span" style="font-size: x-large;">HAS A PHOTO</span> been severely altered? I’m sure you’ve seen examples of photo forgery lately in the news media. How exactly can you detect whether a photograph is a composite of more than one original camera image? How can you tell if the clone tool has been used, replacing scene detail in one part of an image with detail copied from another?<br />
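To make the clone-tool question concrete, here is a naive sketch of copy-move detection: hash fixed-size pixel blocks and flag any block content that appears at more than one location. The <code>find_cloned_blocks</code> helper is hypothetical, written for this illustration only; real forensic methods (such as those in the research linked below) must match robust block features rather than exact pixels, since exact duplicates rarely survive JPEG compression.

```python
# Naive copy-move (clone tool) detector: hash fixed-size pixel blocks
# and report any block content found at more than one location.
# Illustration only; exact matching fails on recompressed images.
from collections import defaultdict

def find_cloned_blocks(pixels, width, height, block=8):
    """pixels: flat, row-major list of grayscale values."""
    seen = defaultdict(list)
    for y in range(0, height - block + 1, block):
        for x in range(0, width - block + 1, block):
            # The block's pixel rows, as a hashable tuple of tuples.
            rows = tuple(
                tuple(pixels[(y + dy) * width + x:(y + dy) * width + x + block])
                for dy in range(block)
            )
            seen[rows].append((x, y))
    # Groups of two or more locations sharing identical pixel content.
    return [locs for locs in seen.values() if len(locs) > 1]

# Example: a 16x8 "image" whose right half is a clone of the left half.
width, height = 16, 8
pixels = [(x % 8) + y for y in range(height) for x in range(width)]
print(find_cloned_blocks(pixels, width, height))  # → [[(0, 0), (8, 0)]]
```

The same idea, applied with features that are stable under compression and slight retouching, is what makes clone detection practical on real photographs.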
<br />
See the <a href="http://www.cs.dartmouth.edu/farid/Hany_Farid/Research/Entries/2011/6/5_Digital_Forensics.html">Digital Forensics</a> webpage for original research by Hany Farid and his group at Dartmouth College. <br />
<br />
Of particular interest is a large PDF file of <a href="http://www.cs.dartmouth.edu/farid/downloads/tutorials/digitalimageforensics.pdf">lecture notes</a>, which starts out with a number of famous doctored photographs from history, and then immediately jumps into complex mathematical examinations of digital images.<br />
<br />
<h3>
Ars Photographica</h3>
<span class="Apple-style-span" style="font-size: x-small;">2012-05-08</span><br />
<blockquote>
<b>Ars Photographica</b><br />
<span class="Apple-style-span" style="font-size: x-small;"><a href="http://en.wikipedia.org/wiki/Pope_Leo_XIII">Vincenzo Gioacchino Raffaele Luigi <i>Cardinal</i> Pecci</a></span><br />
<span class="Apple-style-span" style="font-size: x-small;">1867 </span><br />
<br />
Expressa solis spiculo<br />
Nitens imago, quam bene<br />
Frontis decus, vim luminum<br />
Refers, et oris gratiam.<br />
<br />
O mira virtus ingeni<br />
Novumque monstrum! Imaginem<br />
Naturae Apelles aemulus<br />
Non pulchriorem pingeret.<br />
<br />
<br />
<b>On Photography</b><br />
<span class="Apple-style-span" style="font-size: x-small;">(translated by H.T. Henry, 1902)</span><br />
<br />
Sun-wrought with magic of the skies<br />
The image fair before me lies:<br />
Deep-vaulted brain and sparkling eyes<br />
And lip's fine chiselling.<br />
<br />
O miracle of human thought,<br />
O art with newest marvels fraught -<br />
Apelles, Nature's rival, wrought<br />
No fairer imaging!</blockquote>
This poem was set to music by <a href="http://www.gavinbryars.com/">Gavin Bryars</a>, and is available on Amazon: <a href="http://www.amazon.com/gp/product/B001A9V6XK/ref=as_li_ss_tl?ie=UTF8&tag=romeofthewest-20&linkCode=as2&camp=1789&creative=390957&creativeASIN=B001A9V6XK">On Photography - Bryars, Maskats, Silvestrov</a>.<br />
<br />
<h3>
“St. Louis Parks”</h3>
<span class="Apple-style-span" style="font-size: x-small;">2012-04-21</span><br />
<span class="Apple-style-span" style="font-size: x-large;">ST. LOUIS PARKS</span> — a new book from Reedy Press — with photography by yours truly, including the photo on the cover:<br />
<br />
<a href="http://www.flickr.com/photos/msabeln/7098797917/" title="St Louis Parks cover_high by msabeln, on Flickr"><img alt="St Louis Parks cover_high" height="386" src="http://farm8.staticflickr.com/7189/7098797917_afb5c0c83d.jpg" width="500" /></a><br />
<br />
This view shows the World’s Fair Pavilion atop Government Hill, in Forest Park, in the City of Saint Louis, Missouri. Teenagers are seen here enjoying the cool water of the fountain on a warm June day. I think this photo adequately captures the joy and simple pleasure that ought to be found in a pleasant park.<br />
<br />
This book contains over a hundred of my photos of parks located within the City of Saint Louis. Click here to get your own copy of this book:<br />
<form action="https://www.paypal.com/cgi-bin/webscr" method="post">
<input name="cmd" type="hidden" value="_s-xclick" /><br />
<input name="hosted_button_id" type="hidden" value="9R8Y876N3FTAA" /><br />
<input alt="PayPal - The safer, easier way to pay online!" border="0" name="submit" src="https://www.paypalobjects.com/en_US/i/btn/btn_buynowCC_LG.gif" type="image" /><img alt="" border="0" height="1" src="https://www.paypalobjects.com/en_US/i/scr/pixel.gif" width="1" /></form>
<br />
From the publisher, Reedy Press:<br />
<blockquote class="tr_bq">
<span style="font-family: Verdana; font-size: 27pt;">St. Louis Parks </span><span style="font-family: Verdana; font-size: 14pt;">By NiNi Harris and Esley Hamilton, Foreword by Peter H. Raven</span> </blockquote>
<blockquote class="tr_bq">
<span style="font-family: Perpetua; font-size: 14pt;">St. Louis has great parks. And St. Louisans are passionate about them. </span><span style="font-family: Perpetua; font-size: 14pt; font-style: italic;">St. Louis Parks </span><span style="font-family: Perpetua; font-size: 14pt;">delivers portraits of St. Louis City and County parks, both major and minor, that prove why these common spaces are crucial to the region’s way of life.<br />
</span><span style="font-family: Perpetua; font-size: 14pt;"><br />
</span><span style="font-family: Perpetua; font-size: 14pt;">Acclaimed local historians NiNi Harris and Esley Hamilton take readers through the city and county, respectively. Starting with the establishment of Lafayette Park from thirty acres of common fields in 1836, Harris covers the creation of gems like Tower Grove Park, the nation’s finest Victorian Park, and the dazzling, 1,293-acre Forest Park, while including Citygarden, and its interactive artwork, in the heart of downtown.<br />
</span><span style="font-family: Perpetua; font-size: 14pt;"><br />
</span><span style="font-family: Perpetua; font-size: 14pt;">In the county, Hamilton highlights one-of-a-kind attractions like the renowned Museum of Transportation and Laumeier Sculpture Park, the Butterfly House and St. Louis Carousel at Faust Park, a farm zoo at Suson Park, and the military museums at Jefferson Barracks. In both sections, the authors recognize the citizens, civic leaders, and architects whose work delivered to all St. Louisans picturesque landscapes, ball fields, tennis courts, natural savannahs, and grasslands filled with wildlife, and trails that lead runners through forests and by shimmering lakes.<br />
</span><span class="Apple-style-span" style="font-family: Perpetua; font-size: 19px;"><br />
</span><span class="Apple-style-span" style="font-family: Perpetua; font-size: 19px;">Dramatic photography by Mark Scott Abeln and Steve Tiemann complements the essays. The photographs evoke the unique character and history of the individual parks. They visualize the importance of green space for both escaping and coming together as a community.</span><span style="color: #fdcc0f; font-family: Verdana; font-size: 14pt;"> </span> </blockquote>
<blockquote class="tr_bq">
<span style="font-family: Perpetua; font-size: 14pt; font-weight: 700;">ABOUT THE AUTHORS AND PHOTOGRAPHERS</span> </blockquote>
<blockquote class="tr_bq">
<span style="font-family: Perpetua; font-size: 11pt; font-weight: 700;">NiNi Harris’s </span><span style="font-family: Perpetua; font-size: 11pt;">earliest memory is of an early autumn evening, picking up acorns as she and her father walked along Bellerive Boulevard to Bellerive Park. Her great-great-grandfather’s first job when he arrived in St. Louis in 1864 was planting trees in a St. Louis park. This is her tenth book on St. Louis history and architecture.<br />
</span><span style="font-family: Perpetua; font-size: 11pt;"><br />
</span><span style="font-family: Perpetua; font-size: 11pt; font-weight: 700;">Esley Hamilton </span><span style="font-family: Perpetua; font-size: 11pt;">has been working for the St. Louis County Department of Parks and Recreation as historian and preservationist since 1977. Among preservationists in the St. Louis region, Hamilton’s is a household name. He teaches the history of landscape architecture at Washington University and serves on the board of the National Association for Olmsted Parks.<br />
<br />
<b> Mark Abeln</b> is a native of St. Louis and attended college at Caltech, in Pasadena, California. Mark started taking photography seriously after he took disappointing photos of an important subject. He spent the next years learning the art of photography, and his photos can now be found in numerous publications as well as on his website “Rome of the West.”<br />
<br />
<b> Steve Tiemann</b> graduated from McCluer High School and went on to obtain his forestry degree from the University of Missouri at Columbia. Steve has enjoyed his career as a park ranger and park ranger supervisor with St. Louis County Parks for nearly thirty years. He tries to be in ready mode with a camera while patrolling on foot or bike.</span></blockquote>
Mr. Peter Raven is President Emeritus of the famed Missouri Botanical Garden.<br />
<br />
This book’s publication date is May 1<sup>st</sup>. You can order a copy now:<br />
<form action="https://www.paypal.com/cgi-bin/webscr" method="post">
<input name="cmd" type="hidden" value="_s-xclick" /><br />
<input name="hosted_button_id" type="hidden" value="9R8Y876N3FTAA" /><br />
<input alt="PayPal - The safer, easier way to pay online!" border="0" name="submit" src="https://www.paypalobjects.com/en_US/i/btn/btn_buynowCC_LG.gif" type="image" /><br />
<img alt="" border="0" height="1" src="https://www.paypalobjects.com/en_US/i/scr/pixel.gif" width="1" /></form>
<br />
You can also purchase my earlier book of photography, <i>Catholic St. Louis: A Pictorial History</i>:<br />
<form action="https://www.paypal.com/cgi-bin/webscr" method="post">
<input name="cmd" type="hidden" value="_s-xclick" /><br />
<input name="hosted_button_id" type="hidden" value="UDWHDJ3J266SU" /><br />
<input alt="PayPal - The safer, easier way to pay online!" border="0" name="submit" src="https://www.paypal.com/en_US/i/btn/btn_buynowCC_LG.gif" type="image" /><br />
<img alt="" border="0" height="1" src="https://www.paypalobjects.com/en_US/i/scr/pixel.gif" width="1" /></form>Mark S. Abelnhttp://www.blogger.com/profile/06692448528819277158noreply@blogger.com0