Friday, August 19, 2011

Anniversary of the Daguerreotype

ON AUGUST 19th, 1839, the government of France presented the daguerreotype process as a gift "free to the world," and photography soon became commonplace.

The promise of photography had been around for decades: Joseph Nicéphore Niépce had been pursuing his heliography process since as early as 1793. His problem was that his early photographs quickly faded, and his oldest surviving images date from about 1825 or 1826. Starting in 1829, Niépce partnered with the prominent painter Louis-Jacques-Mandé Daguerre to develop the process further.

Daguerre and Niépce first worked on the physautotype, a process in which lavender-oil residue dissolved in alcohol was applied to a silver plate to produce an image; the plate had to be exposed in a camera obscura for many hours. After Niépce's death in 1833, Daguerre discovered that treating a silvered plate with iodine vapor before exposure, and with mercury vapor afterwards, produced a plate far more sensitive to light. Daguerre patented his process in 1839, and then gave the rights to the French government in exchange for pensions for himself and for Niépce's heirs.

A boom in professional photography followed: by the 1850s, portrait studios could be found in nearly every major city.

Monday, August 1, 2011

1 Bit Depth Cameras

PLEASE EXAMINE these two photos:

[Image: flower, reduced in size from the 8-bit-per-channel version]

[Image: the same flower, reduced in size from the 1-bit-per-channel version]

Not too much difference.  The second photo has perhaps slightly greater contrast, saturation, and sharpness.

Both of these images came from the same RAW file but were processed differently; I reduced each to this size using Photoshop's bicubic resampling algorithm.

Now let's zoom far into a small part of the original images:

[Image: extreme zoom into a small area of the two original images]

The second image has one bit per color channel!  Each of the red, green, and blue color channels is either fully on or fully off, giving only 8 possible colors per pixel, versus millions of possible colors in the first image. But when the image was reduced in size, these pixels were averaged together, and so they appear normal in the final result. By averaging, we can obtain every color in the sRGB gamut.

Lousy, horrible image quality, when we pixel-peep at 100% resolution; but it looks just fine at the resolution shown on the screen.
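For the curious, here is a minimal sketch of the same experiment in Python with Pillow and NumPy. The file names are placeholders, and random-threshold dithering stands in for whatever processing produced my original 1-bit version:

```python
# A rough sketch of the 1-bit experiment; "flower.jpg" is a stand-in name.
import numpy as np
from PIL import Image

img = np.asarray(Image.open("flower.jpg"), dtype=np.float32) / 255.0

# Quantize each color channel to 1 bit: fully on or fully off.
# Comparing against random thresholds (dithering) preserves average
# brightness, much as random grain placement does in film.
noise = np.random.uniform(0.0, 1.0, img.shape)
one_bit = (img > noise).astype(np.float32)

# Shrinking the image averages many binary pixels into each output
# pixel, recovering the intermediate tones.
out = Image.fromarray((one_bit * 255).astype(np.uint8))
out = out.resize((out.width // 8, out.height // 8), Image.BICUBIC)
out.save("flower-1bit-reduced.jpg")
```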

But we really don't have to do pixel averaging to get a plausible image:

[Image: Union Station in Saint Louis, 1-bit-depth black-and-white image]

The pixels in this image are black and white only, with no shades of gray. It is an interesting image, but of poor quality; yet with more and smaller pixels, an image like this would appear to the eye as a continuous grayscale of high quality.
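Pillow will produce just this sort of image: its 1-bit mode applies Floyd-Steinberg error-diffusion dithering by default, scattering black and white pixels so that their local density tracks the original tone. The file names below are again placeholders:

```python
from PIL import Image

# Convert a photo to pure black-and-white pixels, with no grays.
# Pillow's "1" mode uses Floyd-Steinberg error diffusion by default,
# so the local density of black pixels matches the original gray level.
img = Image.open("union-station.jpg").convert("L")   # grayscale first
bw = img.convert("1")                                # 1 bit per pixel
bw.save("union-station-1bit.png")
```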

The notorious megapixel wars of a few years ago had new camera models coming out with ever-higher pixel densities, at the expense of higher noise. When I got my Nikon D40 back in 2008, I chose that model specifically because it struck the best balance between noise and pixel density. Since I was going to produce full-resolution images for the book Catholic St. Louis: A Pictorial History, I needed a minimum number of high-quality pixels for full-page images. I could have used a camera with a higher pixel density, but at that time I had problems with image downsizing, which harms sharpness.

The problem with high-megapixel, small-sensor cameras is noise; but noise is not a problem if you don't look too closely, and clever image manipulation can make it invisible or unobjectionable in the final image. Let's not forget that the single best predictor of image quality is sensor size: a small sensor can deliver a decent small image, but a large sensor delivers high quality at both small and large sizes.

What this little experiment tells us is that we don't need high-quality pixels to get a high-quality final image, if we have enough pixels.

Suppose some manufacturer could cram 250 megapixels into a typical APS-C sized sensor, as found in consumer-grade DSLR cameras. At 100% magnification the image might appear to be mostly noise, but with clever processing you could produce excellent large images with superb detail.

We can learn a few lessons from old film photography. The crystals of silver halide embedded in film change chemically when exposed to light, and light, examined closely enough, arrives in discrete chunks called photons. A crystal needs to absorb only a few photons to become developable, no matter how large or small it is; and since a large crystal presents a bigger target and so catches more photons, large crystals are more sensitive to light than small ones. The crystals that absorbed photons remain in the negative and appear black, while the crystals that did not are washed away during developing, leaving only transparent film.
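A toy Monte Carlo simulation, with entirely made-up numbers, shows why the big grains respond first: photon hits follow a Poisson distribution whose mean grows with grain area, so large grains reach the exposure threshold far more often:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: photons land uniformly at random; a grain becomes
# developable once it absorbs a threshold number of photons.
# The expected photon count is proportional to grain area.
photon_flux = 50.0      # photons per unit area (made-up exposure level)
threshold = 4           # photons needed to expose a grain (made-up)

for area in (0.02, 0.1, 0.5):           # small, medium, large grains
    hits = rng.poisson(photon_flux * area, size=100_000)
    exposed = np.mean(hits >= threshold)
    print(f"grain area {area:4.2f}: {exposed:6.1%} of grains exposed")
```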

So in a certain sense, digital photography is more analog than film photography, and film is more digital than digital sensors. A film grain is either on or off, with no intermediate steps, no shades of gray. In developed film there is either a grain at any given spot on the negative, or there isn't: it is all black or white. Only when we look at the negative from a distance can we perceive an image.

Film grain has many advantages. Because the grains have random shapes and positions, we do not see aliasing in film images: no digital jaggies or interference patterns. By mixing various sizes of grains, film expands its dynamic range: some grains are more easily exposed than others, so we get both good shadow and highlight detail, along with a gradual fall-off in the highlights instead of the abrupt cutoff we see with digital cameras. (Some Fuji digital sensors paired two sizes of photodiodes at each site to mitigate this somewhat.) Film grain can also add perceived texture as well as sharpness.
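Under the same Poisson toy model (with a one-photon threshold for simplicity), a quick numerical sketch shows how blending fast and slow grain populations softens the highlight shoulder; the sensitivities and blend ratio here are arbitrary:

```python
import numpy as np

# Fraction of grains exposed at exposure E, for grains of sensitivity a,
# under the Poisson toy model with a one-photon threshold.
def response(E, a):
    return 1.0 - np.exp(-a * E)

exposures = np.logspace(-1, 2, 7)   # exposure sweep, arbitrary units

for E in exposures:
    fast = response(E, a=1.0)            # large, sensitive grains
    slow = response(E, a=0.05)           # small, insensitive grains
    mixed = 0.7 * fast + 0.3 * slow      # 70/30 blend of the two
    # The blend keeps rising long after the fast grains saturate,
    # giving a gradual shoulder instead of a hard clip.
    print(f"E={E:7.2f}  fast={fast:5.3f}  slow={slow:5.3f}  mixed={mixed:5.3f}")
```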

Digital photography, following film's long example, could use very large numbers of tiny pixel sensors of varying sizes and shapes, each registering only an on or off signal: 1-bit sensors, just like film grain. Although pixel-peeping such an image would be painful, we should expect superior final images, with much higher dynamic range, good fall-off in the highlights, and pleasing texture and sharpness without digital artifacts. We would also have better control over digital noise, particularly in the shadows.
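Here is a rough simulation of such a sensor, again with placeholder file names and made-up constants: every reconstructed pixel is the average of an 8×8 block of binary sensels, each of which fires if it catches at least one photon. Note that the tonal response naturally follows a film-like saturating curve rather than a straight line:

```python
import numpy as np
from PIL import Image

rng = np.random.default_rng(1)

# Simulate the hypothetical 1-bit sensor: treat each pixel of an
# ordinary photo as the true light level, expose an 8x8 block of
# binary sensels per output pixel, then average to reconstruct.
scene = np.asarray(Image.open("scene.jpg").convert("L"), np.float32) / 255.0

N = 8  # sensels per axis per reconstructed pixel
h, w = scene.shape

# Each sensel fires if it absorbs at least one photon (Poisson
# arrivals); the factor 3.0 is an arbitrary exposure level.
lam = np.repeat(np.repeat(scene, N, axis=0), N, axis=1) * 3.0
sensels = rng.poisson(lam) >= 1

# Reconstruct by averaging each N x N block of binary sensels.
recon = sensels.reshape(h, N, w, N).mean(axis=(1, 3))
Image.fromarray((recon * 255).astype(np.uint8)).save("recon.png")
```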

One interesting advantage of super-high pixel densities is that camera manufacturers could include sensors of various color sensitivities, greatly expanding the color gamut of the sensor as well as providing far more accurate color rendition. Extra color sensors could emulate the Purkinje effect, accurately capturing an image that corresponds to how humans see in dim lighting, particularly at dusk. Some sensors could be sensitive to infrared, others to ultraviolet. With so many pixels on the sensor, assigning a certain number of them to odd uses would hardly be wasteful, and would scarcely degrade most images. Pixels of various sizes could be scattered across the sensor for better dynamic range, and random shapes and sizes would make images less prone to aliasing while providing more sharpness.

Sensor design would be challenging, for various trade-offs would need to be made: what would be a good balanced design? Blue sensitivity is poor in the standard Bayer design and could be boosted, and the number of red sensors could be increased as well. Why not also include a number of panchromatic sensors, instead of relying mainly on green for luminance? What percentage of the sensor area should be dedicated to big pixels, and how many tiny ones should be included? Could small pixels be superimposed on big ones? If pixels of varying shape are used, which shapes are best: pointy or round? This design offers great potential. Manufacturers could even incorporate Foveon-style sensors, where multiple colors are sensed at the same location; they could even get very clever and vary the size of the sensors according to depth.

RAW conversion for such a sensor would be difficult, or rather interesting, and would likely require a much faster on-board computer in the camera. But a hypothetical 250 megapixel camera at precisely one bit per pixel would produce RAW files of only about 31 megabytes before compression (250,000,000 bits ÷ 8 = 31,250,000 bytes): not terribly much larger than what we have today. JPEGs produced by this kind of camera would not be full resolution, but then we don't want full resolution; we need some sort of averaging of the pixels. Even a lightly processed TIFF would produce a superior image: up close, we would see something resembling photographic grain, which is much more organic than digital pixels.

This kind of camera would give us unprecedented control over the RAW conversion process. You want an infrared image? You got it. You want ultraviolet? Easy. You want to trade off color accuracy against dynamic range, or bring up detail in both the highlights and the shadows? A one-bit-depth camera could be very useful indeed.