Sunday, December 26, 2010

Focal Length

PHOTOGRAPHY IS NOTORIOUS for the many numbers a photographer needs to know. Focal length is one of them.

Most inexpensive compact cameras have an easy-to-use zoom feature, and casual photographers can merely set the zoom to whatever they want without worrying about any confusing numbers. But confusion can occur if they use a camera with interchangeable lenses, for then they need to learn about focal length.

Fortunately for beginners, the kit lens that comes with most inexpensive interchangeable-lens cameras is adequate for most purposes. These cameras may even come with two lenses: for example, 18–55 mm and 70–200 mm. All you really need to know is that larger numbers zoom in on distant objects, while smaller numbers capture ‘more of the scene’.

It's easy. If you want to get the whole scene in your photo, you set your lens to 18 millimeters. If you want to zoom in, you set your lens to 55 mm. But then a friend asks you to take a photo of her, using her camera. You stand about ten feet away, and taking note of the millimeter markings on her lens, you set it to 18 millimeters and then look through the viewfinder — and you are surprised that she appears smaller in the viewfinder than you would expect.  She suggests that you zoom in a bit, using a setting of about 30 millimeters. So 18 mm on your camera is the same as 30 mm on her camera. As it so happens, a nearby photographer is taking a photo of the same scene: his camera is large and he tells you that he is using a 55mm lens — but he too is taking in the whole scene, for 55 millimeters is a wide-angle lens for his camera. You learn that focal length settings are not necessarily commensurate between cameras.

A pinhole lens

Light usually travels in a straight line through air, and so we can construct a very crude, but workable, lens just by making a small hole in an opaque surface. Light travels in a straight line from an object, through this pinhole, to its destination, which may be light-sensitive film or a digital camera sensor.

Pinhole lens

The focal length of the pinhole lens is merely the distance from your sensor to the pinhole. To illustrate the angle of view of this pinhole lens, draw a line which is the length of your sensor: say, 35 millimeters wide. Perpendicular to and centered on this line, draw a dot, which represents your pinhole. Draw straight lines from the edges of the sensor through the dot: these show the angle of view of your pinhole lens. If you bring the pinhole closer, the view gets wider; move it farther away, and the angle of view gets narrower. You should also see that for any given focal length, a larger sensor will give you a wider angle of view. Using trigonometry, you can calculate the angle of view for any combination of sensor size and focal length. Suppose you have two cameras, one with a sensor twice as wide as the other: doubling the focal length of the pinhole lens on the larger camera will give you precisely the same angle of view as the smaller camera.
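For those who like to check the arithmetic, the trigonometry above can be sketched in a few lines of Python (the function name and the 35 mm example are my own illustration, not from any camera specification):

```python
import math

def angle_of_view_deg(sensor_width_mm, focal_length_mm):
    """Angle of view of a pinhole lens: the angle subtended by the sensor
    as seen from the pinhole."""
    return 2 * math.degrees(math.atan(sensor_width_mm / (2 * focal_length_mm)))

# A 35 mm-wide sensor with the pinhole 35 mm away:
wide = angle_of_view_deg(35, 35)        # about 53 degrees

# Doubling both sensor width and focal length leaves the angle unchanged,
# which is the two-camera example from the text:
assert abs(angle_of_view_deg(70, 70) - wide) < 1e-9

# Moving the pinhole closer widens the view; farther away narrows it:
assert angle_of_view_deg(35, 18) > wide > angle_of_view_deg(35, 100)
```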

Now take a glass lens, and focus it on some object very, very far away, and note the size of the object projected on your sensor. Then take a pinhole lens, and move it closer or farther from the sensor until its projected image is precisely the same size as the image formed by the glass lens. The distance from the pinhole to the sensor is the effective focal length of the glass lens. An 18 millimeter glass lens projects the same size image as would a pinhole located 18 millimeters from the sensor.

But please note that this equivalence between a glass lens and a pinhole lens only works when the distance from the lens to the object is much greater than the distance from the lens to the sensor. A regular camera lens, after all, is not a tiny dot like our pinhole lens, but rather is made of multiple thick chunks of glass. If you focus a glass lens upon a subject very close by — like when using a macro lens to focus on a small insect — then its effective focal length will change considerably.
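This change can be sketched with the thin-lens equation (a simplification; real multi-element lenses are more complicated). In the article's terms, the effective focal length is the lens-to-sensor distance, and the thin-lens equation shows it growing as the subject gets closer:

```python
def image_distance_mm(f_mm, subject_distance_mm):
    """Thin-lens equation 1/f = 1/d_o + 1/d_i, solved for the image
    distance d_i (the lens-to-sensor distance when in focus)."""
    return f_mm * subject_distance_mm / (subject_distance_mm - f_mm)

# Distant subject: the lens-to-sensor distance is essentially the focal
# length, so the pinhole equivalence holds.
far = image_distance_mm(50, 10_000_000)   # subject 10 km away; about 50 mm

# Macro subject at twice the focal length (1:1 magnification): the
# lens-to-sensor distance doubles.
near = image_distance_mm(50, 100)         # 100.0 mm
```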

Also note that this equivalence only works when a glass lens produces a rectilinear image — where straight lines in the scene translate to straight lines on the image. Fisheye lenses are a bit more complicated since they produce so much distortion.

Equivalent focal length

Serious photographers use seriously large cameras, for the simple reason that large camera sensors — either digital or film — naturally produce cleaner, sharper, more detailed images. Photojournalists also want good picture quality, but they lug cameras around all day long, and so they need a camera that is a good compromise between weight and image quality. Photojournalists are the most commonly seen type of professional photographer — and amateurs, in imitation, started using similar equipment, which included the 35mm film format. Vast numbers of amateur-grade, interchangeable-lens 35 millimeter film cameras were produced, most notably by the same manufacturers who made the photojournalist cameras.

People became quite used to the sizes of lenses for these cameras. For example, a 50mm lens produced an image that looked rather normal — not too zoomed in, and not too wide. Lenses of about 105 millimeters or longer were good for portraits, while focal lengths of about 30 millimeters or less were good for architectural interiors. Please recall that these focal lengths are for 35 mm film; a medium-format camera would use longer focal lengths for the same purposes, while an inexpensive consumer camera would use much shorter focal lengths.

Eventually the manufacturers of photojournalist cameras went digital; alas, due to high cost, the digital sensor size was smaller than the beloved 35 millimeter film frame. Because people were so familiar with the focal lengths used by 35 millimeter cameras, manufacturers stated equivalent focal lengths. So an 18 mm lens used with the new digital sensor is said to be equivalent to a 27 mm lens used on a 35 mm camera — that is, it provides the same angle of view. A 35mm lens on these digital cameras is equivalent to a 50 mm lens on a 35 mm camera. Is this helpful, or confusing?

Because the 35mm format was rather standard, digital cameras with sensors smaller than 35 mm film are often called cropped-sensor cameras. Beginners often get hung up on the related marketing term ‘crop factor’. A 20 mm lens on a camera with a crop factor of 1.5 will provide the same angle of view as a 20 mm × 1.5 = 30 mm lens on a 35 millimeter film camera. This terminology is only useful if you are very familiar with the old 35 millimeter cameras and their lenses, and is otherwise confusing.
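If you want to compute crop factors yourself rather than trust the marketing, the arithmetic is simple. Here is a sketch in Python; the 24 × 16 mm sensor size is an assumed, approximately APS-C figure, not a specification for any particular camera:

```python
import math

def crop_factor(sensor_w_mm, sensor_h_mm):
    """Ratio of the 35 mm film frame diagonal (36 x 24 mm) to this
    sensor's diagonal."""
    return math.hypot(36, 24) / math.hypot(sensor_w_mm, sensor_h_mm)

def equivalent_focal_length(actual_mm, factor):
    """Focal length that gives the same angle of view on 35 mm film."""
    return actual_mm * factor

# An assumed APS-C-style sensor of about 24 x 16 mm:
factor = crop_factor(24, 16)          # about 1.5
equivalent_focal_length(20, factor)   # about 30, as in the example above
```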

If you are a beginner, I would suggest you forget all about equivalent focal lengths and crop factors. Instead, find out the size of your sensor, in millimeters. For example, many consumer digital SLR cameras have a sensor that is about 30 millimeters across on the diagonal. A wide-angle lens will have a value that is less than this measurement, while a telephoto lens will be much larger than this value. A normal lens — for this sensor — will be equal to this size or perhaps a bit larger.

Tuesday, December 21, 2010

A Digital Color Wheel

MOST COLOR WHEELS you find at art stores, or in Internet searches, aren't too helpful for digital photography. While they may illustrate the visual order of the colors, they are of little use if you want to mix colors digitally; they may even be quite misleading. So I created my own color wheel using the primary colors found in the sRGB standard, which is used by digital cameras, computers, and high-definition television.
Color wheel according to the sRGB standard

This color wheel shows the correct relationships between the red, green, and blue colors that are primary in the sRGB color system, as well as their opponent or secondary colors of cyan, magenta, and yellow.

These primary and secondary colors are the brightest and most saturated colors that can be generated by the sRGB color system. The coding in each color circle gives you the formula for generating the color: for example, cyan is GB, which means that Red = 0, while Green and Blue = 255. Halfway in between the primaries and secondaries are bright tertiary colors. These tertiaries are coded with lower-case letters indicating half of a given color: for example, sky blue is coded gB, meaning Red = 0, Green = 128, and Blue = 255.

Some old color wheels use red, yellow, and blue as primary colors; others use green, purple, and orange. These are misleading for computer use since they don't give us a good idea of opponent colors. In this color wheel, if you mix together equal portions of colors opposite one another, you will get a middle gray: mixing blue and yellow gives you a gray where the red, green, and blue values all equal 128.
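The wheel's letter coding and the opposite-color mixing can be sketched in Python. The decode helper is my own illustration of the coding scheme described above, and the mix function rounds half up so that the average of 0 and 255 lands on 128, matching the text:

```python
def decode(code):
    """Decode the wheel's letter codes: an upper-case letter means that
    channel is 255, lower-case means 128, an absent channel is 0."""
    rgb = {'r': 0, 'g': 0, 'b': 0}
    for ch in code:
        rgb[ch.lower()] = 255 if ch.isupper() else 128
    return (rgb['r'], rgb['g'], rgb['b'])

decode('GB')   # (0, 255, 255): cyan
decode('gB')   # (0, 128, 255): sky blue

def mix(c1, c2):
    """Mix equal portions of two colors, averaging each channel
    (rounding half up)."""
    return tuple((a + b + 1) // 2 for a, b in zip(c1, c2))

# Opposite colors average to middle gray:
mix(decode('B'), decode('RG'))   # (128, 128, 128)
```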

If your images have a color cast, you can achieve white balance by moving towards the opposite color. An image that is too yellow needs more blue; an image that is too green needs more magenta.

UPDATE: My use of a value of 128 for the tertiary colors is not correct, since 128 is NOT the middle tone. It is for this reason that the wheel does not appear to be visually uniform: the tertiaries appear somewhat dark. The updated wheel can be found here.

Sunday, December 19, 2010

sRGB Colors Out of Gamut

YOU OWN AN inexpensive desktop color printer. You have a digital camera, and you want to make prints. You print your images, and your final photos are disappointing. Does this sound familiar?

This is the color gamut problem: inexpensive desktop printers — those with four ink colors (cyan, magenta, yellow, and black) — cannot reproduce all of the colors that are produced by a digital camera. The best way around this is to get a printer that has more ink colors — but these can be expensive. And so the best alternative is to process your images to make the most of your printer's limited color gamut.

Here are the three primary colors in the sRGB color system:

RGB out of gamut

In the wide strips, we have one of the pure sRGB primary colors going from a value of 0, which is black, to 255 which is the brightest pure color that can be represented by the sRGB system.

Do you see the blue line at the bottom of each strip? This is the color gamut limit of four-color commercial printing presses and inexpensive desktop printers (this color space is abbreviated CMYK, after the four ink colors used: cyan, magenta, yellow, and black). Everything above the lines cannot be accurately printed — which is most of the image. Note that all the bright primary colors are out of the CMYK gamut. The narrow strips on the right are an approximate representation of the colors you will get from an inexpensive printer. Note that greens and blues are particularly poor and relatively unsaturated.

When we mix colors together in sRGB, we still see the same problem:

Red-Green showing CMYK gamut

Here, red goes from zero on the left to 255 on the right; green goes from zero at the bottom to 255 at the top. Red and green mix together to make yellow. The areas surrounded by the blue lines are the colors that are within the CMYK color gamut. Reds, oranges, and greens, including some leaf-green colors, cannot be accurately portrayed by CMYK; in fact, most of the colors in this mixture cannot be printed.

Other color mixtures are hardly better:

Red-Blue showing CMYK gamut

Red going across, blue going up.  Again, most of the image is not accurately printable.

Green-Blue showing CMYK gamut

Green across, blue going up. This is somewhat better, but you still can't print decent primary colors.

Things do get better when we have mixtures of all three colors. Here are mixtures of two colors; in each, the third color is set to 50% of its maximum value:

Mixed colors

The top image has dark blue mixed in, the middle dark green, and the bottom dark red. The printable color gamut is expanded by the addition of the third color. If we had a pure grayscale image, then all the gray tones would be printable.

Real-world photos typically don't have many pure, saturated reds, greens, and blues, and so the out-of-gamut problem may be a bit less prominent than what we see here. But most images will have at least some colors that can't be printed:

Saint Louis Zoological Garden, in Saint Louis, Missouri, USA - snowman with out of gamut colors

In this image of a snowman, the out-of-gamut regions are shown on the right, painted in green. If you were to print this image on a four-color printer, these color regions would look a bit flat and unsaturated. You will also lose detail.

But CMYK gives as well as takes away. Even though we cannot print the bright red, green, and blue primary colors, CMYK has its own primary colors: cyan, magenta, and yellow, which are typically brighter and more saturated than what you have with sRGB. You could process your images to take advantage of these colors.

To get an overview of these color systems, you may want to take a look at some of these articles:
Color Spaces, Part 1: RGB
An RGB Quiz
Color Spaces, Part 2: CMYK
Part Two of "Color Spaces, Part 2: CMYK"
A CMYK Quiz
When processing for print, you want to emphasize the colors the printer can print, while toning back the colors that are out of the printer's gamut. Following is a relatively simple process where you can make the most of your images in Photoshop.

Convert your image to a wide-gamut color space. Typically Adobe RGB is used. If you shoot RAW, Photoshop's Adobe Camera RAW (ACR) program can select this color space upon import — this would be useful for best quality. Some cameras can shoot JPEG images in Adobe RGB, but I would suggest not using it unless you really know what you are doing.  Select the menu item Edit, Convert to Profile...

Convert to Profile

Turn on the Gamut Warning feature in Photoshop:

Gamut Warning

If your target printer has a color profile installed in Photoshop, go to the Custom... menu and select it instead of CMYK. Following shows an image with the gamut warning on; here I have the warning set to gray, but you can change the color to be more visible on a particular image.

Gamut warning on image

Select the Image, Adjustments, Hue/Saturation... menu item:

Hue-Saturation dialog box

Note that the drop-down list has both the RGB and CMYK primary colors. For each of the color classes which are out-of-gamut, adjust the Saturation and Lightness sliders until the Gamut Warning turns off.  You can be as careful or as sloppy as you want here, by adjusting the slider on the bottom. You can also select individual colors with the eyedropper tool. (The middle part of the slider shows the colors that will be fully corrected. You can adjust the outside parts of the slider for good blending.)

Adjusting reds

Here I brought the red bow tie into the CMYK color gamut by darkening and desaturating the color range; be aware that there may be more than one way to bring a color into gamut, with some ways being better than others. Were I being more careful, I would have done this on a layer with a mask so as not to also desaturate the snowman's smile. Then I corrected the blue part of the image to bring it within the CMYK gamut. In other images you may also have to tone down bright primary greens.

Next we can enhance the printer's primary colors. We go through the same process as before, but we work with the cyan, magenta, and yellow color ranges, increasing brightness and saturation. The major improvement we can make here is with the yellows, which I was able to saturate and brighten considerably. You can brighten and saturate until the Gamut Warning turns on; then you've gone too far. (However, it is OK if some small parts of your image are out of gamut — you just don't want too much over a broad area, otherwise you will lose detail.)

This is a bit of a leap of faith, since you most likely cannot see the final results of your editing: it is out of your monitor's gamut.  If you have areas of your image that ought to have lots of bright cyan, magenta, or yellow ink, you can place an eyedropper tool on the spot and measure the CMYK values directly. If a spot is supposed to have a very bright yellow component, the brightest you can get, then that spot, after your processing, ought to be rather close to having 100% yellow ink.

Purists may insist that all this manipulation is ‘inauthentic’ but in reality this scene greatly exceeded the color gamut and dynamic range of my digital camera; in fact this is a blend of three separately exposed images. So we are justified in making the colors of the snowman as bright and as saturated as we are able to. Likewise, if we are printing a full-color image to a narrow-gamut CMYK printer, we are justified in printing as much of a full-range of color as we are able.

There are many ways of accomplishing a goal in Photoshop, and this one is particularly straightforward. The most visually accurate color corrections can be made using the Lab color space. Also of use are the Vibrance tool, layers, masking, and most notably Levels and Curves.

When I am preparing images for commercial press, I eventually manipulate the images directly in the CMYK color space, making the most of that limited range of color. Unfortunately we cannot do the same with desktop printers, since they use different inks than are found in commercial presses, and so they typically require the image to be in RGB format. The Gamut Warning feature is the most powerful tool for this purpose.

Wednesday, December 8, 2010

A CMYK Quiz

HERE IS A sample image, which shows the four channels of a CMYK image. Use your knowledge of CMYK to determine some facts about this image.

If you haven't read them yet, you may first want to read these articles: Color Spaces, Part 2: CMYK and Part Two of "Color Spaces, Part 2: CMYK".

Quiz - CMYK

This sculpture of an acorn is found in Wydown Park, in Clayton, Missouri.

I am convinced that a thorough knowledge of the channel system of digital images is essential for good photography. By looking at a color photograph, you ought to be able to imagine what each color channel looks like, and by examining the color channels, you ought to be able to determine what colors are represented.

The CMYK color system represents inks printed on a page, and includes the colors cyan, magenta, yellow, and black. Each channel represents one color of ink. No ink is placed on the page where the channel is white; and where the channel is black, we have 100% ink coverage. For example, a bright cyan-colored object will be black in the cyan channel, and white in the other channels. Where we happen to have roughly equal quantities of cyan, magenta, and yellow ink, the CMYK system will subtract those colors and replace them with black ink. So K (black) will dominate the shadows.
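The channel behavior described above follows from the textbook naive RGB-to-CMYK conversion with full black replacement. Here is a sketch in Python; real printer profiles are far more sophisticated than this formula, so treat it only as an illustration of the channel logic:

```python
def rgb_to_cmyk(r, g, b):
    """Naive RGB -> CMYK conversion with full black replacement.

    0.0 means no ink (white in the channel); 1.0 means 100% ink
    coverage (black in the channel).
    """
    c, m, y = 1 - r / 255, 1 - g / 255, 1 - b / 255
    k = min(c, m, y)              # the shared gray component becomes black ink
    if k == 1.0:                  # pure black: only the K plate prints
        return (0.0, 0.0, 0.0, 1.0)
    # Remove the portion now carried by black ink from the other plates:
    c, m, y = ((x - k) / (1 - k) for x in (c, m, y))
    return (c, m, y, k)

# A bright cyan object: 100% in the cyan channel, nothing elsewhere.
assert rgb_to_cmyk(0, 255, 255) == (1.0, 0.0, 0.0, 0.0)

# A dark gray: the equal C, M, Y amounts are replaced entirely by black ink,
# so K dominates the shadows, as described above.
assert rgb_to_cmyk(64, 64, 64) == (0.0, 0.0, 0.0, 1 - 64 / 255)
```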

Here is your task:
  1. Identify each color channel in the image above.
  2. There are two colors of flowers in the image. Identify the colors.  We have taller flowers which can be seen in front of the acorn, and shorter flowers of a different color in the foreground.
Unlike my last quiz, I won't give you any clues. Use your knowledge of nature and the channel structure of CMYK to determine the answers.

Saturday, December 4, 2010

The Problem of Resizing Images

IMAGINE YOU HAVE a peculiar boss at work. He wants to make sure that you are at your desk working forty hours per week. So once a day (seven days a week!) at precisely the same time every day (since he is extremely methodical), he peers into your tiny cubicle to see if you are at work. You are never there, and he is quite upset. You will hear about this at your next annual review, nine months from now. Sadly, it appears you won't be getting a raise.

Well, he looks into your cubicle precisely at midnight every day. Despite graduating with honors from a top M.B.A. program, he really isn't all that bright, and he lacks a life outside of work. As a matter of fact, you do work 8 a.m. to 5 p.m. (with an hour for lunch), Monday through Friday, and you are always at your cubicle during those times. But the boss marks you down as being absent 100% of the time.

Well, since you actually do your required work, bossman decides to check up on you four times a day. Quite methodically, he appears at your cubicle at midnight, 6 a.m., noon, and 6 p.m. As it so happens, you take your lunch at noon, and he just barely misses seeing you every time. You are still absent 100% of the time, in his mind.

Boss is still puzzled. With apparently too much time on his hands, he checks on you eight times a day: midnight, 3 a.m., 6 a.m., 9 a.m., noon, 3 p.m., 6 p.m., and 9 p.m. He finally sees you! Since he sees you on 2 out of the 8 visits he makes, Monday through Friday, he estimates that you are working at most 2/8 × 24 × 5 = 30 hours per week. He is disappointed, but at least you get to keep your job.

Note that if he visited your cubicle three times a day, at midnight, 8 a.m., and 4 p.m., he'd see you twice (the first time just as you got there) and would estimate a working time of up to 2/3 × 24 × 5 = 80 hours per week. But four visits per day gave zero. Clearly the frequency of his visits can change the results dramatically.

Your boss's boss likes what he is doing, and asks that he gather more data so that he can present an impressive chart at an upcoming meeting. Your boss now checks your cubicle 12 times a day: midnight, 2 a.m., 4 a.m., 6 a.m., 8 a.m., 10 a.m., noon, 2 p.m., 4 p.m., 6 p.m., 8 p.m., and 10 p.m. He sees you working 4 times per day, Monday through Friday, and so he estimates that you work up to 40 hours per week. If he visited your cubicle 24 times a day, or 48 times a day or more, he might (hopefully) notice that his additional visits didn't give him much more useful data: he would always get a result close to 40 hours per week.
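The boss's estimates can be reproduced with a short simulation. This is only a sketch: the work schedule, with the lunch hour away from noon to 1 p.m., is taken from the story above:

```python
def at_desk(hour):
    """At work 8 a.m. to 5 p.m., with the lunch hour away from noon."""
    return 8 <= hour < 12 or 13 <= hour < 17

def estimated_hours_per_week(visits_per_day):
    """The boss's estimate: the fraction of evenly spaced daily visits
    (starting at midnight) that find you in, scaled to 24 hours and
    5 weekdays."""
    times = [24 * k / visits_per_day for k in range(visits_per_day)]
    hits = sum(at_desk(t) for t in times)
    return hits / visits_per_day * 24 * 5

# The story's sampling rates:
estimated_hours_per_week(1)    # 0  -- midnight only
estimated_hours_per_week(4)    # 0  -- the noon visit lands in the lunch hour
estimated_hours_per_week(3)    # 80 -- wildly high
estimated_hours_per_week(8)    # 30
estimated_hours_per_week(12)   # 40 -- finally near the true figure
```

The true figure is 8 hours × 5 days = 40; only at the highest sampling rate does the estimate settle there.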

Slightly different

Let's consider a slightly different scenario. Your boss is always on top of the latest scientific theories. He read that the natural sleep-and-wake rhythm of human beings, when not exposed to the cycle of the sun, is 25 hours a day. Always ready to implement the newest findings, and since he apparently never sees any sunlight, your boss now lives out a 25-hour day, although who knows when he actually gets any sleep. Instead of checking your cubicle once a day, he checks it once every 25 hours. If he sees you at your desk, he gives you credit for the entire 24-hour day (unfortunately, he has yet to convince his boss to move all the employees to this new scientific schedule). So the first day, he checks for you at midnight, the second day at 1 a.m., the third at 2 a.m., and so forth.

Although he finds you working sometimes four days in a row, he is infuriated that you are (apparently) taking two-week (and longer!) vacations at regular intervals. He does estimate that in the long run you are actually working on average 40 hours per week, but he is worried about all the important conference calls you must be missing.

No common sense

This hypothetical boss, despite being diligent, lacks common sense. This lack of common sense, while being reprehensible in a human being, is quite the norm with digital cameras and with computer software such as Photoshop, although we must credit computer technology with also being diligent. It's hard — no, impossible — to program a computer with common sense, and so we ourselves must make up for what computers lack if we want good results.

Better, yet worse

I was delighted when I upgraded from a nice but lowly point-and-shoot camera to a decent, yet inexpensive, DSLR model. Immediately I noticed how much sharper my new photos were, and how much less noise they had, even in low light. But there was a problem, and I couldn't quite put my finger on it. With my old camera, when I was processing images for the Internet, I would simply resize them and add sharpening. Even though there were various resizing algorithms available in Photoshop, none seemed to make much of a difference. I did put a lot of effort into using good sharpening algorithms, which made my photos look much crisper without obvious artifacts. But this did not work well with my new camera.

My old process did not always work with my new camera — and the maddening thing was that my results were quite inconsistent: some of my final images looked fine, some were terrible (nature photos were typically the worst). Formerly, when I reduced the size of my images, I had Photoshop set to use the Bicubic Sharper algorithm, which Photoshop says is “best for reduction”, but I found that the new camera's images looked quite rough. So I changed it to use regular Bicubic. This required quite a bit more sharpening than I had used before, and I started using better algorithms to reduce the bright artifacts I was now seeing, especially around distant leaves on trees and along certain edges. Sometimes I had to manually retouch away some of the sharpness. To me, this is not acceptable, so I started asking around for advice. As it turns out, Photoshop gets it wrong: it does not implement its resizing algorithms correctly.

Hit a brick wall

Digital cameras have ranks and files of pixels arrayed across their sensor in precise order, just like the smart-yet-foolish boss in our allegory above. Precisely every x micrometers, a different photosite captures light, just as precisely every y hours the boss would check up on his subordinate.

In the story, you arrive at work at a regular interval, but your nosy boss, if he didn't check up on you frequently enough, would get a wildly inaccurate estimate as to when you actually were present. Only when he checked up on you many times in a day did he get an estimate that was accurate enough.

A similar thing happens in a digital camera. If there is an underlying, repeating pattern in the scene, the camera may get a wrong estimate of what the scene looks like if it does not have enough pixels to capture the detail. We see this on initial capture. Here is a section of a photograph, showing textured carpet:

aliasing

The camera did not have enough resolution to capture the repeating texture of the carpet adequately, so we end up with ugly artifacts, shown here by the odd pattern. There were no curves in the texture of the carpet, but the pixels on the camera, being spaced too far apart, recorded a strange signal: the camera was not sampling the texture frequently enough. This pattern is called the Moiré effect, or an interference pattern, and is a special case of aliasing.

Not only do we see this on initial capture, but this can be a severe problem when we are downsizing an image. Downsizing in business ruins lives, while downsizing in digital photography ruins images.  If there is a repeating pattern in an image, we can get bizarre patterns upon downsizing if we end up with fewer pixels than a particular pattern requires. Here is a detail of a larger image:

Brick building detail

A brick wall. We have a classic, repeating pattern, which will test Photoshop's ability to resize.

First we use Bicubic Sharper, which Photoshop tells us is best for reduction:
Resizing - Bicubic Sharper

Ugh. Bad pattern. Just like the boss who was checking on you at 25-hour intervals (and thinking that you were frequently taking 16-day vacations), we see some bands of the brick wall where the lighter white mortar predominates, and other bands where the dark brick predominates. Also, the rest of the image looks rather rough.

Please note: these sample images are intended to be viewed at 100% resolution. If you are viewing these images on a mobile device, they may be further resized by your device, not giving you an accurate representation.

Now let's try Bicubic:
Resizing - Bicubic

The repeating pattern is still there, but the rest of the image looks a bit better, if soft. Now normally I'd add sharpening to an image like this, but the pattern on the bricks just looks unprofessional.

Bicubic Softer does not help:
Resizing - Bicubic softer

Now lately I've been using the Bilinear algorithm for resizing. The final images, to my eyes, look crisper than Bicubic, yet less rough compared to Bicubic Sharper.  Let's try Bilinear on the brick wall:
Resizing - Bilinear

Interesting. The pattern changed, and maybe it is somewhat less obvious. But still bad. I do like how the rest of the image turned out though: it would hardly need any sharpening at all.

For the sake of completeness, let's try Nearest Neighbor resizing. Photoshop says it is best when we want to ‘preserve hard edges’, and since the building has hard edges which we want to preserve, it should look fine, right?
Resizing - nearest neighbor

Nope. Blech. Looks like a zebra.

Note that the big problem we are seeing is due to the regular pattern of the subject, such as the brick wall, coupled with the regular pattern of pixels on the digital image. We do not see Moiré patterns with film cameras and prints: the chemical film grains are irregular shapes and sizes.

But fortunately there is a good general theory to help us out. The Nyquist sampling theorem states:
If a function x(t) contains no frequencies equal to or higher than B hertz, it is completely determined by giving its ordinates at a series of points spaced 1/(2B) seconds apart.
Very roughly speaking, if we take a picture of something that has a regular pattern and we don't allocate more than two pixels per repeating element, then we will get a Moiré pattern. But actually it is slightly more complicated than that, since we have three colors of pixels at slightly different locations on our sensor. In the photo of the carpet above, there is much more Moiré in the red and blue channels compared to green, as we have twice as many green sensors.

There are also other mathematical effects that complicate matters, such as the fact that the Nyquist theorem assumes the frequencies are perfect sine waves. A pattern with hard edges, such as the bricks, is actually equivalent to a mix of somewhat higher sine-wave frequencies. So some authorities state that for hard-edged repeating patterns such as these bricks, and with a Bayer array (where we have separate photosites for each color channel), we ought to capture at least three (or maybe up to four) pixels per repeating pattern element to avoid aliasing.

We find the exact same thing when downsizing an image. If the final resampled image does not have more than two pixels for each element of a repeating texture in the original image, we will get a Moiré pattern. Because of the complications given above, maybe we need a little more, like 3 or so pixels, just to be safe. So if our bricks, about 10 pixels apart vertically in the original image, are reduced to roughly 1/5 of their size or less, then they will definitely show a bizarre pattern, since we are allocating two or fewer pixels per brick. This is what we see in the photos above. I didn't get any Moiré effect when I downsized the image to either 50% or 33% (5 or 3.3 pixels per brick) — and just started getting Moiré at 26%, which is about 2.6 pixels per brick.
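The brick-wall failure can be reproduced numerically with a one-dimensional sketch; the numbers mirror the example above (bricks 10 pixels apart, reduced to 50% and 25%). This illustrates naive point sampling, not Photoshop's actual algorithms:

```python
# A one-dimensional "brick wall": a 2-pixel mortar line every 10 pixels.
PERIOD, MORTAR, WIDTH = 10, 2, 400
signal = [1 if i % PERIOD < MORTAR else 0 for i in range(WIDTH)]

def decimate(sig, step):
    """Naive reduction: keep every step-th pixel, with no pre-filtering."""
    return sig[::step]

def period_of(sig):
    """Average spacing between the rising edges of the mortar lines."""
    edges = [i for i in range(1, len(sig)) if sig[i] and not sig[i - 1]]
    return (edges[-1] - edges[0]) / (len(edges) - 1)

period_of(signal)                 # 10.0 -- the true brick spacing
period_of(decimate(signal, 2))    # 5.0  -- 50% reduction: correct (10 / 2)
period_of(decimate(signal, 4))    # 5.0  -- 25% reduction: should be 2.5,
                                  # but half the mortar lines vanish entirely
```

At a 4× reduction there are only 2.5 output pixels per brick, below the Nyquist limit, so the output shows false bands at twice the true spacing, just like the photos above.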

This is analogous to what is done in the audio recording industry. Young, healthy human ears can hear frequencies up to about 22 kHz, and audio engineers will sample the audio at more than twice that frequency, 44.1 kHz, to avoid audio artifacts like we see in our aliased images.

Boss tries harder

Your boss still wants to keep track of you, but because he has other duties, he attempts to automate the task. He installs a sensor at your cubicle door. Whenever you are in your cubicle, the sensor records that fact. At the end of a fixed period of time, the sensor resets itself and sends a signal to your boss's office indicating whether or not you were in your cubicle at any time during that period. He sets the sensor to send him data every day at midnight, and successfully finds out that you are in the office every weekday. Under your boss's old system, he knew precisely whether you were at your desk at a given moment in time; the new system, while less specific, gives him more useful information. In effect, the sensor blurs the boss's data a bit, but he gets better results: with one sample per day, he learns more than he did by visiting your cubicle four times a day. Were he to sample the sensor more times per day, he would get a much better idea of your attendance than if he were to visit your cubicle the same number of times. Maybe he'll find something better to do with all the time saved.

As it so happens, digital cameras incorporate anti-aliasing filters to combat Moiré patterns. These filters soften the image a bit, but they lessen effects like the one we see in the carpet photo above. Consumer-grade compact cameras tend to have strong anti-aliasing filters, DSLRs have weaker ones, while medium-format digital camera backs may have none at all. With the higher-grade cameras it is up to the photographer to avoid or correct these artifacts, although as pixel counts increase, this becomes less of a problem.

Blurring is the key

This softening is the key to downsizing images. According to the Nyquist theorem, our sampling rate needs to be more than double the highest frequency in the original signal to avoid artifacts, but when we make an image smaller, we greatly increase the spatial frequency of its patterns. So what we need to do is blur the image first, before downsizing, so that the Nyquist criterion still holds for the final image. In more technical terms, an image needs to be put through a low-pass filter before being down-sampled: the high-frequency components of the image have to be eliminated first by blurring.

I started getting the ugly artifacts when I reduced the image below 2.6 pixels per brick, and so to eliminate them we need to run the image first through a low-pass filter, which will get rid of any detail 2.6 pixels in size or smaller.
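A toy sketch of why the low-pass filter matters, in plain Python. The brick spacing and 26%-style reduction mirror the numbers above, but the one-pixel "mortar lines" and the crude box blur are my own simplification, not Photoshop's resampling:

```python
# A synthetic wall: a one-pixel mortar line every 10 pixels, like bricks
# spaced 10 pixels apart.
wall = [1 if i % 10 == 0 else 0 for i in range(200)]

# Downsize by keeping every 4th pixel: 2.5 output pixels per brick,
# right at the danger zone discussed above.
naive = wall[::4]

# Naive sampling catches only every other mortar line: lines appear every
# 5 output pixels instead of every 2.5 -- a false, coarser pattern (Moiré).
assert all((v == 1) == (k % 5 == 0) for k, v in enumerate(naive))

# Now low-pass first: average each group of 4 input pixels (a box blur
# matched to the reduction factor), then take one sample per group.
blurred = [sum(wall[i:i + 4]) / 4 for i in range(0, len(wall), 4)]

# Every brick now leaves a trace: no run of 3 output pixels is empty, so
# the texture fades smoothly instead of beating into a false pattern.
assert all(any(blurred[k:k + 3]) for k in range(len(blurred) - 2))
```

The blurred result is softer, exactly as described below, but it no longer lies about where the bricks are.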

Photoshop does not blur an image prior to downsizing, not even the newest Photoshop CS5. That is why we get these digital artifacts. I would think that this would be fairly easy to implement.

How an image ought to be blurred prior to downsizing is a mathematically complex subject, and certainly the optimal blurring algorithms are not found in Photoshop. But we could experiment with Gaussian Blur, although choosing the Gaussian radius may be a bit problematic.

OK, so we want to be sure that no frequency components of our bricks end up smaller than about 2.5 pixels per brick in the final image. I initially chose to apply a Gaussian blur with a radius of 2.5 before downsizing. This is a naïve starting point, so I tried blurs of various radii:

Resizing - 2.5 blur - sharpened

Radius = 2.5. Just for fun, I used the Nearest Neighbor resizing algorithm, the same one that gave us the horrendous zebra stripes seen above, and yet it doesn't look too bad, does it? I added 50% Photoshop Sharpen to these images to make them look a little better; better sharpening is called for, however.

Here are other Gaussian blur radii:

Resizing - 1 blur - sharpened

Radius = 1.  We still have severe aliasing.

Resizing - 1.5 blur - sharpened

Radius = 1.5.  Still some aliasing.

Resizing - 2 blur - sharpened

Radius = 2. Some very faint aliasing; otherwise this is a good image.

Resizing - 3 blur - sharpened

Radius = 3.  Too soft.

OK, so we can certainly get rid of aliasing when it obviously appears on an image like this one. But this may not be optimal, for the final image appears a bit too soft. One trick I've used is to blend together two copies of an image, each reduced using a different algorithm: in this case, I'd select the anti-aliased version for the bricks, and a normal downsize for the rest of the image.

However, anti-aliasing may help images even without an obvious pattern such as this. I recall that I often get poor resizing results, particularly with distant leaves against the sky, and along certain edges. Perhaps using even a soft blur will help with these images.

But we really ought to be using better algorithms than Photoshop offers. Very many algorithms are implemented in the free ImageMagick command-line utility, and in-depth discussions are here and here. For downsizing, they recommend the Lanczos algorithm for photographic images. It properly does blurring before reducing, although it does not use the optimal blur algorithm, for the sake of good performance. Using that software, I resized the brick building:
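The Lanczos filter is a windowed sinc function, and its small negative lobes are what give Lanczos-reduced images their characteristic slight sharpening (and occasional ringing). A sketch of the kernel in plain Python; a = 3 lobes is a common choice, though implementations vary:

```python
import math

def lanczos(x, a=3):
    """Windowed-sinc Lanczos kernel with a lobes on each side."""
    if x == 0:
        return 1.0
    if abs(x) >= a:
        return 0.0
    px = math.pi * x
    return a * math.sin(px) * math.sin(px / a) / (px * px)

assert lanczos(0) == 1.0          # full weight at the center
assert abs(lanczos(1)) < 1e-12    # zero at the other integers, so an
assert abs(lanczos(2)) < 1e-12    # unscaled image passes through unchanged
assert lanczos(1.5) < 0           # negative lobe: the built-in mild sharpening
assert lanczos(3.5) == 0.0        # zero outside the support
```

When downsizing, the kernel is stretched by the reduction factor, which is exactly the "blur before reducing" behavior described above.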

Resizing - Lanczos

Lanczos still has a bit of Moiré, so I'm a bit disappointed. Otherwise it looks pretty good, and is much better than any image found above.

I tweaked the processing a bit and got this:

best effort

I blended the above image with a version that I blurred before downsizing. I masked out the building in the unblurred layer, giving us this composite.

Apparently there are some other, better algorithms available, but they are computationally expensive, or difficult to fine-tune optimally. However, whichever resizing algorithm you use, it is important to sharpen the image afterwards to bring back some crispness to the image.

Wednesday, December 1, 2010

Part Two of "Color Spaces, Part 2: CMYK"

IF YOU TRY to invent a new language, Esperanto for example, you won't get far if your new language has nouns but no verbs. Likewise, if you invent a new color system, you won't get far if you don't include the common primary colors, as well as black and white. Your color system doesn't have to cover every conceivable color, just as a human language does not need words to describe quantum physics. You just need the basics.

The CMYK color system is used by commercial presses, as well as by inexpensive desktop printers. CMYK is not a very broad system of color, but it has the basics, and is suitable for most printing purposes. Using cyan, magenta, yellow, and black inks, these printers can output a smaller range of color than even the sRGB standard (used by most cameras, computers, and HDTV), which in turn only displays about 35% of all possible colors. But CMYK can print all the basic classes of colors, with a nice, continuous gradation between these colors.

I am convinced that a thorough knowledge of the color structure of images is needed for quality photography. At a bare minimum, a photographer ought to know about the three color channels delivered by the camera — red, green, and blue — and how the RGB channels work together to represent color.  You should just be able to look at an image, and imagine with your mind's eye how each of the channels ought to look.  And by looking at black and white representations of the channels, you ought to be able to estimate roughly what the various colors are in the image. Using the “by the numbers method”, you ought to be able to know if your colors are correct just by examining the RGB values — even if you are color blind. See my article, Color Spaces, Part 1: RGB for an introduction to this color system.

But it is nice having printed output, instead of just viewing pictures on a screen. If you are fortunate someone might be willing to pay you to print your photos in a book or magazine, or you may make prints for clients. If you want to do an excellent job with printing, better than typical, then having an understanding of the printer's color channel structure is also essential.

RGB output assumes three primary colors on a brightly lit screen: you illuminate red, green, and blue lights, which mix together to produce a broad range of colors, including black and white. The more light illuminating the screen, the brighter the picture. CMYK, on the other hand, places four colors of ink on a page, and the more ink on the page, the darker the image. Fortunately, once you know RGB, moving to CMYK is quite easy, since the two systems mirror each other. See my article, Color Spaces, Part 2: CMYK for details.

The key is the opponent color system. Some colors, when mixed, produce other colors: green and red lights shining together produce a yellowish light, and cyan and magenta inks mixed together give blue. However, when you mix opponent colors together, you get gray. The RGB and CMYK color systems use colors that are opponent to each other: red is the opposite of cyan, and so on, and so the red channel will look quite similar to the cyan channel, while green will look similar to magenta. The major difference is the black channel in CMYK, which holds much of the shadow detail (by the way, RGB has very little color information in the shadows, and CMYK beats it in that department).

Let's examine how colors mix in the CMYK system (see the RGB article for analogous images):

Cyan versus magenta

Here, we simulate an increasing amount of cyan ink moving across the image, from none on the left to 100% coverage on the right. Likewise, we have an increasing amount of magenta ink, from 0% at the bottom to 100% at the top. No yellow or black ink is shown. We have white at the lower left-hand corner and a somewhat purplish blue at the upper right-hand corner: if we draw a diagonal between those corners, we have a purplish-blue color going from fully saturated, to pastel, to white. Along the upper edge of the image we see a gradation between magenta, purple, and blue, and along the right-hand edge we have cyan merging into blue.

Please note that this is simulating ink on a page. The red outlined region, in CMYK, is actually outside of the color gamut of the sRGB color system used by this image. When I converted the image from CMYK to sRGB, Photoshop chose the closest sRGB color to represent what was found in CMYK. As it so happens, we can get better, brighter, more saturated cyan inks than what can be shown on most computer monitors: what you are seeing here is actually a bit duller than can be printed.  (Perversely, if you were to print this image, some of these sRGB colors are themselves outside of the CMYK gamut, and the quality would degrade even further. Color management is complex, and frustrating.)

Most critical for photographers are the sky-blue colors along the middle of the right-hand edge. These colors are outside of the gamut of sRGB, but are well within the range of CMYK. When skies are particularly deep in color, such as those found at high altitudes, or on a brilliant, clear winter day, especially when you use a polarizing filter, your sky will be out of the sRGB gamut and will exhibit lots of noise (noise which will be exaggerated by JPEG compression). Examine your red channel: if it is black, then you know the sky is out of gamut; but if you carefully process your photograph, starting with a RAW image and never entering sRGB, you still might be able to get a clean printed sky.

Cyan versus yellow

Here we have cyan ink going across, and yellow ink going upwards. There is no magenta or black ink. These two colors mix together to produce green.  We have various colors of leaf-green going along the top edge, and ocean-green along the right edge.

Our outlined gamut warning areas show that we can get a bit better yellow on printed output than we can on a display. Recall that yellow is the opponent color to blue, and digital cameras do a very poor job of capturing blue colors, particularly at low light levels, which may translate to a somewhat poor yellow. But if you capture an exceptionally clean image in the blue channel, you can convert your RAW image to CMYK and use the high-quality ink to get a slightly wider range of yellow in your final image, especially good pastel yellows which are hard to come by in sRGB.

Far more problematic are the green colors: CMYK does a much better job with certain shades of green than sRGB does. Again, this is a simulation, and had I printed the original CMYK file, the color differences would be rather striking. Especially problematic are most shades of ocean green, as well as some shades of leaf-green. This is an interesting observation: CMYK technology, which is quite venerable, does a better job with the natural colors of the sea, sky, and land than the computer-standard sRGB color system does. Also, flesh tones, especially for Scandinavians and Africans, can easily go out of the sRGB gamut. The computer standard was developed before digital photography became widespread, when computer graphics were more concerned with simple business and scientific diagrams, and so it was not fine-tuned for common natural colors. However, sRGB does a better job with pure bright reds and blues.

Magenta versus yellow

Magenta going across, yellow going up. These mix together to produce red at the upper right-hand corner, though not as good and bright a red as we see in sRGB. But we see that CMYK produces better yellows, and some oranges.

Cyan versus black

Cyan across, black up. Here we finally mix in some black tones, and clearly CMYK is the winner with cyans, especially dark cyan colors. Recall that sRGB will often throw blue skies out of gamut; the bright primary cyan ink and the black channel here make up for that quite nicely.

Magenta versus black

Magenta versus black. CMYK wins with dark magenta tones.   Again, remember that you really can't see the actual effect of ink blending on your monitor — the real result is darker and richer.

Generally, RGB color models can be poor because they don't allocate much information to the shadows. There are very few colors darker than the blue primary, only about 1% of all allocated colors, as can be seen in another article. Shadows in general tend to be poor, and the number of dark colors is severely limited. CMYK makes up for this by allocating much of its gamut to dark colors.

Yellow versus black

Yellow versus black. If you studied the previous charts, you probably guessed what this looks like.

CMY versus black

Here I mixed 100% each of the three colors across the image, while I added black going up. You ought to notice on the lower half of the image that the mixture of the three colors is not precisely gray, rather it has a slight reddish tone, which tells us that the cyan ink is a bit deficient. (Cyan and red are opponent colors — more of one means less of the other.) With RGB, you merely make all three values equal if you want a pure gray tone, but with CMYK, cyan has to be a bit stronger. For this reason, doing a white balance in this color system is a bit more complex.

There are two reasons why printers use a black ink. One is that a CMY mixture is dark gray at best; the other is that too much ink on a page can cause smearing or other defects. In the darkest shadows, black ink can replace nearly all of the colored ink, using merely one-third the total amount of ink.

In the article Imaginary and Impossible Colors, I showed how three numbers are sufficient to describe any color visible to the human eye. CMYK uses four colors, which means there is often more than one way to specify the same color, by trading off CMY for black. Printers consider the black plate to be the most important, and photographers creating CMYK separations of their photographs ought to study the trade-offs very carefully. You can also manipulate the K channel in Photoshop — it is an excellent place to add sharpening, local contrast, and nice steep curves for rich shadow detail, but you might inadvertently remove some color.

If you are sending your images to a desktop printer, do not use the CMYK color system. Rather, process your images in a wide-gamut RGB color space, such as Adobe RGB or ProPhoto, and set Photoshop's gamut warning to either CMYK or preferably the printer's own ICC profile. This will allow you to get the rich, deep colors, and to fully express the colors of nature, but avoid bright reds and greens which cannot be printed. Be aware that out-of-gamut colors will either translate to noise or to flat, muddy colors. Understanding CMYK will let you know what to expect.  If your printer uses more than four colors, then you are quite fortunate as you can get richer, purer colors — install the printer's ICC color profile in Photoshop and process your images using that profile as your gamut warning. Be sure to use a wider-gamut RGB colorspace for your processing. Since you will be working with colors which likely can't be displayed on your computer monitor, you ought to take the leap of faith that your images might actually look better when printed than what you see on the screen — just keep a close eye on the numbers and on the gamut warning.

If you are sending your images to commercial press, you will want to study CMYK further, or just take your chances and let the pre-press folks do the conversion for you.


You don't have to convert your image to CMYK in order to see what amounts of inks your image would use. You can configure Photoshop's Info panel to display CMYK values for the eyedropper tool. If you are attempting to set a particular color to the brightest possible CMYK red value, you can set an eyedropper on the color and keep an eye on the CMYK values as you adjust your image: you want to set magenta and yellow near 100% with cyan and black low.
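As a sketch of what those eyedropper numbers mean, here is the naive RGB-to-CMYK conversion in plain Python. Real conversions go through ICC profiles, ink limits, and black-generation settings, so Photoshop's readout will differ somewhat from this formula:

```python
def naive_cmyk(r, g, b):
    """Convert 0-255 RGB to CMYK percentages, ignoring ICC profiles."""
    c, m, y = 1 - r / 255, 1 - g / 255, 1 - b / 255
    k = min(c, m, y)              # replace the gray component with black
    if k == 1:
        return 0.0, 0.0, 0.0, 100.0
    scale = 100 / (1 - k)
    return ((c - k) * scale, (m - k) * scale, (y - k) * scale, k * 100)

# The brightest possible CMYK red: magenta and yellow near 100%,
# with no cyan and no black diluting it -- the target described above.
c, m, y, k = naive_cmyk(255, 0, 0)
assert (c, m, y, k) == (0.0, 100.0, 100.0, 0.0)
```

Watching the C and K values while you adjust a red is essentially watching how much gray contamination the conversion will put into your ink mixture.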

CMYK values are also used to correct an image for good skin color. Human skin of all races has a cyan value less than the magenta value, and a magenta value less than the yellow. Black can be nearly any value, depending on race. Be very careful not to adjust skin tones so much that they go out of gamut; that is particularly noticeable.

Read part one of this article on CMYK: Color Spaces, Part 2: CMYK
And here is my article on RGB: Color Spaces, Part 1: RGB
If you are confident that you understand CMYK, try this: A CMYK Quiz
For color spaces based more closely on human vision, see this: Color Spaces, Part 3: HSB and HSL, and Color Spaces, Part 4: Lab.

Monday, November 29, 2010

Color Spaces, Part 2: CMYK

THREE NUMBERS SUFFICE. If you desire to mathematically describe or represent any color visible to the human eye, the simplest and most well-ordered models will include exactly three numbers, no more and no less.

But please notice that I wrote that three numbers suffice to describe a color. Normally we think of color theory in terms of mixing colors: for example, computer monitors typically have three kinds of dots, each a certain precise shade of either red, green, or blue. Various mixtures of these color dots at various intensities will produce all the shades of color viewable on the screen, from dark gray or black, to white, and with a rainbow of colors throughout. Alas, although we can accurately characterize every known color by three numbers, we cannot mix all known shades with three primary colors. Three colors do not suffice, and this is the color gamut problem. If you choose three colors for your primaries, then no matter which colors you pick, there will still be colors that you are unable to mix.

For more information, see my article on imaginary and impossible colors.

For reasons of cost and practicality, most color devices use just three colors, and these provide a limited gamut of colors. Most computer monitors and High-Definition televisions use the sRGB color gamut, which can display about 35% of possible colors. Expensive high-gamut monitors can approach 50% of possible colors. Generally missing in these color output devices are rare colors such as scarlet and Imperial purple. Good cyans and some greens are also missing, but the system is generally adequate for most uses.

In the RGB color system, red, green, and blue lights are mixed together to provide a wide range of colors and shades. But we cannot use only red, green, and blue inks on a page to produce a similar range of colors. See my article, Color Spaces, Part 1: RGB, for examples of how these additive colors work together: for example, if you shine a red and a green light together, you will get a bright yellow color, but if you mix red and green paints together, you will get a dark, muddy mess. You cannot get a bright color by mixing RGB inks. Mixing saturated colored lights together will always produce a brighter color; mixing saturated colored paints together will always produce a darker color. So when we put ink on paper, we have to use a subtractive color system, which chooses light-toned, pure primary colors for mixing.

Recall the discussion in the RGB article about the opponent color relationships. These are opposite color pairs, which produce shades of gray when you mix them, and not a unique color.
Red is opponent to cyan
Green is opponent to magenta
Blue is opponent to yellow
And since we are working with ink on paper, I might add:
White is opponent to black
The three primary colors in the RGB or additive color system are red, green, and blue, while the three primary colors in the CMY or subtractive color system are cyan, magenta, and yellow. RGB and CMY are therefore opponent to each other.

Consider the following image:

Broemmelsiek Park, in Saint Charles County, Missouri, USA - red berries against blue sky

We have red berries against a blue sky. In the RGB system, the red berries will be bright in the red channel, and dark in the green and blue channel, since pure reds have little to no green or blue in them. If we examine the color channels separately, we see this:

Red berries - RGB

Red berries are almost white here, because in the RGB color system white is a strong color, while black means the absence of a particular color. Since the berries are nearly pure red, they are white in the red channel, and black in the other channels. Since we have a nice blue sky, the sky is suitably lightest in the blue channel; and since midday blue skies tend towards cyan and not magenta, the green channel is brighter than the red.

The CMY color system works a bit differently. White indicates an absence of ink, while black means that a particular ink has 100% coverage. So a part of an image where all three channels are white means that no ink is put on the page, and so the white color of the page shows through. The same image in the CMY color system is this:

Red berries - CMY

Red berries have no cyan color in them, and so are white in the cyan channel. Magenta and yellow ink mixed together makes red, so the berries are dark in both those channels.  Likewise, blue skies have little or no yellow in them, and so the sky in the yellow channel is light, and is dark in the cyan channel, meaning there is lots of cyan ink there. Cyan plus magenta equals blue, and since our cyan channel is darker than the magenta, the blue sky will properly be a greenish blue shade and not purple.

Please note that the RGB and CMY channels look nearly identical. Because of the opponent colors used, working in the CMY color system is hardly different from working in the RGB color system. The channels are not exactly identical, however, because the printing and television industries use slightly different color standards. But they are close.

Recall the discussion above about limited color gamuts, and how three primary colors cannot produce the full gamut of colors visible to the human eye. Computer monitors really have it easy, since their powerful back-lighting can produce colors much brighter than the artificial illumination typically found indoors. But the poor printed page does not have that advantage: the brightest tone available will always be the paper itself, and that paper will be duller than the room lighting. And so printed output will generally have a narrow color gamut.

But we can expand the color gamut if we add more colors of ink. Full-color printing always adds at least one additional color to expand the gamut, and in commercial printing, that color is black:

Color space example - CMYK

The standard cyan, magenta, and yellow inks used in the printing industry don't mix together to make a good black; rather, they look muddy. Another problem is that commercial printers have what is called an ink limit: some presses can't handle too much ink on the page without causing problems, and so printers will insist on a limit to the total ink coverage at any given spot on the page. Some shoddy printing may have an ink limit as low as 240%, which means that you can't mix full coverage of all three colored inks, since that would give us 300% coverage, over the ink limit. 100% black ink can replace 300% colored ink, which is quite a savings, and the black ink looks much better than an equal mixture of colors. Adding black expands the color gamut of the printed page, and does so while ensuring a cleaner press run with less chance of smudged ink.
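The ink-limit arithmetic can be sketched directly in plain Python. The 240% limit is the figure mentioned above, and the full replacement of the gray component by black shown here is the simplest form of the strategy; real separations replace only part of it:

```python
def total_ink(c, m, y, k):
    """Total ink coverage, in percent, at one spot on the page."""
    return c + m + y + k

# A deep shadow mixed from colored inks alone: 300% coverage,
# well over a shoddy press's 240% ink limit.
assert total_ink(100, 100, 100, 0) == 300

def replace_gray(c, m, y):
    """Move the gray component (the common part of C, M, Y) into black."""
    k = min(c, m, y)
    return c - k, m - k, y - k, k

shadow = replace_gray(100, 100, 100)
assert shadow == (0, 0, 0, 100)     # one ink instead of three
assert total_ink(*shadow) == 100    # now comfortably under the 240% limit
```

The same spot on the page prints with a third of the ink, and with a cleaner black than the muddy three-ink mixture.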

Here is our image in CMYK (where 'K' means 'key' or black):

Red berries - CMYK

The twig is dark brown, and much of its tone now comes from the black channel, as do the shadows. See also how Photoshop removed ink from the color channels: shadows there are now a medium gray.

The CMYK color gamut is considerably smaller than the sRGB gamut most often used in the computer industry and by digital cameras. However, the CMYK gamut is not completely contained within sRGB: printing can produce better cyan, magenta, and yellow, whereas sRGB produces better red, green, and blue.

We can expand our color gamut by adding colored ink. In the printing industry, these are called spot colors. If you don't think that the standard color mixtures are good enough, you can pay the printer to add spot colors. Be aware that this can be quite expensive, and is typically only used for the finest work. Were I to use an accurate spot color for the red berries, they would then become white in the CMY channels — since most of the color would be transferred to the new spot channel.

Cheap computer printers use the CMYK color system. Quality photo printers will have more than four colors, and there are some models that use ten. But if you use a desktop color printer to output photographs, be aware that you will be paying several dollars per page for the ink alone; your costs may be fifty times higher than what a commercial printer charges in bulk.

For a further discussion of CMYK, click here for part 2.
If you think you understand CMYK, then take A CMYK Quiz.
For an overview of the RGB color system: Color Spaces, Part 1: RGB
For color spaces more natural to artists, see Color Spaces, Part 3: HSB and HSL.
For a color space designed to be visually uniform, see Color Spaces, Part 4: Lab.

Thursday, November 25, 2010

Quick Tips for Food Photography

  1. Shoot quickly — food fresh out of the oven or refrigerator looks better.
  2. Use natural sky lighting. Food often lacks definition, so a small or fairly distant window can produce good shading. Generally, you want the light to provide sharp, well-defined shadows to enhance the texture of the food. You may have to use fill-in reflectors, otherwise color and texture will be lost if large areas of shadows are too dark. Aim for a 1-to-2 E.V. range between large lit and shadowed areas: of course, dark shadows under a plate for example are completely acceptable, just not on the main parts of the food itself.
  3. Avoid using the camera's own flash. Avoid mixing natural and artificial lighting, unless both have a close color balance. Authorities in food photography state that it is difficult to use artificial lighting well; when they do use it, they prefer small, distant light sources to provide sharper shadows.
  4. Set your exposure and post-processing so that you get good highlights on the food: having a full exposure range will also bring out the colors of the food (in Photoshop, using Levels or Curves in RGB mode will enhance color). Be sure that you don't overexpose too large an area, because you might get muddy color shifts. It is OK to overexpose specular highlights.
  5. The color of food is very important to making it look appetizing, so be sure to do a good color balance. Contemporary food photography seems to prefer a slightly cool color balance, while traditional food photography preferred slightly warm: both look good, as long as the color balance is close to neutral.
  6. Use props to good effect, such as tablecloths, utensils, glasses, napkins and shakers. But be aware that the food itself is the main subject and shouldn't be overwhelmed with secondary items.
  7. Contemporary food photography uses very shallow depth of field, and prefers lenses with excellent bokeh or background blur. This is tricky to do right, for you have to judge the correct focus point. While I think this effect is attractive, perhaps it is a bit overdone. Some use tilt/shift lenses — or even bellows cameras with these motions — in order to precisely control the plane of focus.
  8. Food photography is essentially still-life photography. There is an immense body of work in still-life, particularly with painting. Do some research and use still-life theory to good effect.
  9. Your image may not look like you remember seeing it, due to the dim-light adaptation of the human eye. In particular, color and texture may look a bit flat. In this situation, food photos may benefit from having the blue color channel blended into the image to give greater contrast to specific colors. See my articles on the Purkinje Correction.
  10. Get low. Typically, we look down on food at about a 45 degree angle; this might not be best for getting a good shot. Get a bit lower.
  11.  Check your background. Be sure it doesn't detract from the food, which is your main subject. Classical still life preferred a black background, while contemporary food photography likes a white or pastel background, completely out of focus. You don't want the eye to be distracted by the background in most cases. Alternatively, your photo may only show the table top.
  12. Food benefits from extreme lens sharpness. Macro lenses are particularly prized for this sort of work. Use a sturdy tripod and focus carefully. In post processing, use good techniques to preserve and enhance sharpness.
  13. To give a good perspective, most food photographers use a slightly telephoto lens for this work and set the camera several feet away from the food; six feet is better. If you have a stylist, be sure there is plenty of room to work between the camera and the subject. The wider the lens angle, the more area you have to control in your photo: but some have employed wide angles and great depth of field to portray an entire kitchen along with the food.
  14. Food styling — that is, preparing the food itself to look good in photography — is an advanced specialty, and can be quite involved. I recommend the book Food Styling: The Art of Preparing Food for the Camera by Delores Custer.
  15. My food photography can be seen in the book Thursday Night Pizza, by Fr. Dominic Garramone. Click here to see larger photos of the pizzas: these photos were taken from directly above with no photo styling, per instructions from the publisher. Otherwise I used natural sky lighting, reflectors, and accurate white balance. I used an antique Nikkor 55mm f/3.5 Micro lens for sharpness, with the camera being located about six feet above the pizzas.

Wednesday, November 10, 2010

Photoshop Wishlist #1

I AM CURRENTLY evaluating Adobe Photoshop CS5 on my computer, and have 18 days left until the trial copy expires. For the most part, I am delighted by the product, and see many improvements over my old CS3 version. It does not require that much additional computer power — and sometimes it uses even less, since it uses the graphics processor and memory to do tasks once reserved for the main processor.

Photoshop is a venerable, highly developed and nuanced product, and like any complex, actively developed system that's been around for a long time, has many features which see little use nowadays, as well as the refinement to be able to do important things very, very well.

However, a highly developed system may find it difficult to adapt to new conditions, having been optimized for previous conditions. Photoshop has its roots as a raster image processor primarily for graphics arts professionals, and is well-known as a good platform for doing digital art, with its excellent support of many paintbrush-like tools for creating images from scratch. But it is also used in photography, as its name suggests. I am beginning to see some limitations of its photographic capabilities, and one major limit is that images are always strictly bound to an output medium.

For most Photoshop users, this limit means that you edit your images in the sRGB color space, with eight bits per color channel. That isn't too bad, and it is the obvious approach for 90% of all users: after all, that is the standard format used by most cameras and Internet web browsers. Certainly you would want to edit a file in the format which the camera delivers and which your computer can display. Photoshop does things the way they ought to be done, right?

I see some problems with this. Each color channel has a maximum value of 255, a minimum value of 0, and we can use only integer steps between: 1, 2, 3, and so forth, with no intermediate values. This lack of precision is of little consequence to most users, and if you do need greater precision — for example, if you are applying severe curves to your image — then certainly you can use 16 bit mode (as I do) to increase the number of possible values. This extra precision helps avoid digital processing artifacts such as banding, and also lets you get better shadow detail.
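As a small sketch of this precision problem (the gamma value and the Python code are my own illustration, not anything Photoshop-specific): a severe darkening curve applied in 8-bit integer mode collapses the darkest levels together, while the same curve computed without rounding keeps every level distinct.

```python
# Sketch: a severe darkening curve (gamma 2.5, chosen for illustration)
# applied in 8-bit integer mode collapses the 26 darkest input levels
# into just two output values, destroying shadow detail. The same curve
# computed in floating point keeps every level distinct.

def curve_8bit(v, gamma=2.5):
    """Apply a gamma curve to an 8-bit value, rounding back to an integer."""
    return round(255 * (v / 255) ** gamma)

def curve_float(v, gamma=2.5):
    """The same curve with no rounding: every input keeps a distinct output."""
    return 255 * (v / 255) ** gamma

shadow_inputs = range(26)                      # the 26 darkest levels
out_8bit  = {curve_8bit(v)  for v in shadow_inputs}
out_float = {curve_float(v) for v in shadow_inputs}

print(len(out_8bit), len(out_float))           # 2 distinct values vs 26
```

Brightening those shadows again afterwards cannot recover the collapsed levels, which is exactly why 16 bit mode helps.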

CS5 has a great improvement over CS3 in that it allows far more operations on 32 bit images, giving us great precision in image manipulation; I haven't tried it yet, but look forward to experimenting with it.

But that isn't good enough. I'd like to see fractional RGB numbers. I want RGB values greater than 255. I want negative RGB numbers. But this is madness! You cannot display an image with RGB values greater than 255! And what on earth are negative RGB values? Those are clearly impossible: there is no such thing as negative light!

But remember that I stated that in Photoshop, images are always bound to a specific output medium, which for most photographer users is probably 8 bit sRGB. While I clearly do eventually want an 8 bit sRGB image, there may be times during processing when my intermediate files will be out of that gamut. And I do process my images mainly in the wide ProPhoto gamut — or in the ultra-wide L*a*b color space — with 16 bits per channel to overcome the limits of sRGB, at least temporarily.

Do not think of processing images as a step-by-step process, where each increment produces a superior image.  Sometimes you have to make an image look worse before you can make it look better. I propose making images so bad that they are impossible to print, or even view accurately on your computer monitor — at least temporarily.

For example, when I apply a severe curve to an image, anything that ought to go over 255 is set to 255, and so we lose information and image detail. However, if its value ought to be 300, I want it to be 300, even though it is out of the gamut for the time being.  If I tell Photoshop to make an image twice as bright, I want the entire image to be twice as bright, without worrying about losing highlight detail. I will deal with the gamut when I need to deal with it, which is when I'm preparing the final image for print or web display.
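A round-trip sketch of the clamping problem just described (the pixel values are illustrative): doubling brightness and then halving it should return the original image, but a pipeline clamped at 255 destroys the highlight detail along the way.

```python
# Sketch: double brightness, then halve it. A bounded 8-bit pipeline
# clamps at 255 on the way up, so distinct highlight tones merge and the
# halving cannot recover them. Unbounded values survive the round trip.

def double_clamped(v):
    return min(2 * v, 255)     # how a bounded 8-bit pipeline behaves

def double_unbounded(v):
    return 2 * v               # the proposed behavior: values may exceed 255

highlights = [140, 160, 180]   # three distinct highlight tones

clamped   = [double_clamped(v) // 2 for v in highlights]
unbounded = [double_unbounded(v) // 2 for v in highlights]

print(clamped)     # all three tones have collapsed to 127
print(unbounded)   # [140, 160, 180]: the original detail survives
```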

I often add together multiple images to make a final image. What I have to do is apply an opacity to each layer (which is like doing division) to get my final result, but this introduces severe rounding errors, and as a result we lose a tremendous amount of detail in the shadows: a bad thing, especially since digital photography is notorious for having terrible shadows. What I would like is to be able to add together images with impunity. Image addition, which Photoshop calls Linear Dodge (Add), has a maximum value of 255, but if the final value of all this addition ought to be 500, that is what I would like to see.
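A sketch of unbounded image addition (the pixel values are illustrative): stacking four exposures of the same bright pixel. A bounded 8-bit pipeline clips at 255 after each addition; with unbounded values the true total survives, and dividing once at the end yields a correct average.

```python
# Sketch: stack four exposures of the same bright pixel. Clipping after
# each addition throws away signal; an unbounded sum keeps it all.

frames = [180, 120, 110, 90]          # the same bright pixel in four frames

total_clipped = 0
for v in frames:
    total_clipped = min(total_clipped + v, 255)   # clips after each add

total_unbounded = sum(frames)          # 500: out of gamut, but preserved

print(total_clipped, total_unbounded)             # 255 vs 500
print(total_clipped // 4, total_unbounded // 4)   # 63 vs 125: the clipped
                                                  # stack lost half the signal
```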

Generally speaking, I would like to see in Photoshop a pure kind of image algebra, where we can do all sorts of operations on images in a way that follows the standard rules of arithmetic, such as add, subtract, multiply, and divide, as well as other more obscure operations such as exponentials. To do this accurately, we can't have the hard cutoffs of 0 and 255, nor should we be limited to mere integers.

This brings us to negative RGB numbers. These in fact can represent real colors. For example, if you work in a narrow-gamut color space similar to sRGB, and you want to represent a real color outside of its gamut, you can mathematically represent this if you are willing to allow at least one RGB number which is negative or greater than 255. So a negative RGB does not mean negative light, but merely an out-of-gamut condition. If we are allowed to use negative numbers — and numbers greater than 255 — then we will be able to represent all colors while still using a system that is otherwise identical to our narrow-gamut color system. This system will remain relative to a particular gamut, while not being limited to that gamut.
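A sketch of how a real color lands on a negative coordinate (the XYZ value is my own illustrative choice; the matrix is the standard CIE XYZ to linear sRGB conversion): a highly saturated green, converted into sRGB terms, demands a red component below zero.

```python
# Sketch: a real, visible color can require a negative sRGB coordinate.
# We convert an (illustrative) CIE XYZ value for a highly saturated green
# to linear sRGB using the standard XYZ-to-sRGB matrix.

XYZ_TO_SRGB = [
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
]

def xyz_to_linear_srgb(xyz):
    return [sum(m * c for m, c in zip(row, xyz)) for row in XYZ_TO_SRGB]

saturated_green = (0.20, 0.60, 0.10)      # XYZ: a green outside sRGB
r, g, b = xyz_to_linear_srgb(saturated_green)

print(round(r, 3), round(g, 3), round(b, 3))
# The red channel comes out negative: not "negative light", just a real
# color that sRGB cannot reach without leaving the 0..1 range.
```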

This has many benefits for a careful Photoshop user. If you work in the ProPhoto or Adobe RGB color spaces, and I know many people do, how then do you know that a particular color is out of sRGB gamut? Certainly you can turn on the Gamut Warning feature (I use it all the time), but how can you create a mask for this sort of thing? Can you tell, just by looking at an RGB value, that it is out of gamut? By using large and negative numbers, we can then precisely identify what is out of gamut simply by the numbers: is it greater than 255 or less than 0?
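With unbounded channel values, that gamut mask becomes a trivial numeric test per pixel; a minimal sketch (the pixel values are illustrative):

```python
# Sketch: "out of sRGB gamut" becomes a simple per-pixel number check
# once channel values are allowed outside 0..255.

def out_of_gamut(pixel, lo=0, hi=255):
    """True if any channel falls outside the target gamut's range."""
    return any(c < lo or c > hi for c in pixel)

pixels = [(12, 200, 31), (300, 180, 90), (-14, 40, 255)]
mask = [out_of_gamut(p) for p in pixels]

print(mask)   # [False, True, True]
```

Such a mask could then drive a selective correction that touches only the out-of-gamut regions.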

I often attempt to brighten shadows, and try to add lots of local contrast so that dark areas of an image still appear to be dark to the eye, yet in fact are not all that dark, and instead show lots of detail. This is often impossible to do well due to the arithmetical rounding errors found in low RGB values, which is a contributing factor to noise.  Ideally, a numerical representation of RGB would give equal precision to all levels of perceived brightness, but that is not what we currently have, as can be seen in the illustration below:

number of colors by brightness

Most of our current systems of numerically representing color are biased towards midtones, particularly saturated green and magenta tones, while offering a paucity of dark and bright colors. This gives us the risk of banding in our final image.  Having fractional RGB numbers would alleviate this problem greatly, and though we can use 16 bit images, having fractional values would give us a better guarantee of processing shadow values — and highly saturated dark colors — to avoid rounding errors.  I've noticed that 8 bit sRGB in particular handles navy blue rather poorly, which is a pity, for that is my favorite color. We always risk banding when we have large areas of dark blue, as is often found in brilliant deep blue winter skies, especially when using a polarizing filter. We see the same problem with bright yellow colors.

If you take an 8 bit image and convert it to 16 bit, Photoshop scales the RGB values so that they fill the new numerical range. So a value of 255 will be converted to 32768, which is the maximum value in Photoshop's 16 bit mode (Photoshop uses a 0 to 32768 range rather than the full 0 to 65535). In the 32 bit system, which uses floating point numbers, 255 is converted to 1.0, the nominal white point of that system: all smaller RGB values are some fraction less than 1.
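The scalings just described, sketched for a single channel value (the function names are my own):

```python
# Sketch: how a single 8-bit channel value is rescaled on conversion.
# Photoshop's 16 bit mode tops out at 32768; 32 bit mode maps white to 1.0.

def to_16bit(v8):
    return v8 * 32768 // 255      # 8 bit -> Photoshop 16 bit

def to_float(v8):
    return v8 / 255               # 8 bit -> 32 bit floating point

print(to_16bit(255), to_float(255))   # 32768 1.0
print(to_16bit(128), to_float(128))   # a midtone lands proportionally
```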

Instead, I propose an alternative method. When you convert an 8 bit image to this new system, all values remain unchanged. The difference is that your values can, after processing, be greater than 255, less than 0, or some fractional number. With this system, you can be very careful, and never allow your image to go out of gamut, or you can edit to your heart's content and worry about gamut later. If you edit an sRGB image in the sRGB color space, your image may want to go out of gamut and you will never know it, except that detail will disappear.

There are a few problems with my system.  First, you can't see the extra colors if you don't have a wide gamut monitor, but we already see this problem when working in the Adobe RGB or L*a*b color space. The other problem comes when we want to convert a high-precision image back down to 8 bits.

The key to working with images in any gamut is to do by-the-numbers processing, and have a thorough understanding of the channel structure of the images. Instead of merely determining if an image looks OK on your screen, you instead measure an image to be sure the colors are right. Calibrating your images is more important than calibrating your monitor.

Converting an image back down to an output format like 8 bit sRGB is more problematic, but take a look at Photoshop's own conversion options from 32 bit images.

Admittedly, doing something like this may not work well within the Photoshop product, as it would require a major redesign of many features. Still, I do think that it would be quite useful for accurate image processing.

Tuesday, October 26, 2010

Black and White

OCCASIONALLY YOU SEE, on the dpreview.com forums, a posting questioning the use of black and white in contemporary photography. The critic — almost always apparently an educated, brash young man — will declare black and white photography obsolete, for it merely was a product of historical forces, ignorance, and technological compromises, and so it has no relevance to us today; he states that black and white photography is something that ought to be abandoned and forgotten.

This is of course the error of historicism, which in its extreme view denies any universal laws or truths. An opposite error idealizes all situations according to a simplistic theory, and ignores the inherent messiness of life. Most of us bounce back and forth between these two extremes; let us instead seek the virtuous middle and attempt to find out what black and white photography is about.

I can see various reasons for either shooting black and white film or doing digital black and white conversions.
  • Just because you like it.
  • Cost, convenience, or necessity.
  • Nostalgia. 
  • Technical advantages. 
  • For aesthetics or mood. 
So why should we still produce black and white photography? Let's consider these individually.

Just because you like it

OK, why do you like black and white photography? Contemplate the reasons why you find it appealing. Perhaps it is some combination of the following?

Cost, convenience, or necessity

Suppose your photograph will be printed in a newspaper, bulletin, flyer, or other inexpensive black and white medium. You may prefer your photo being printed in full color, but since that is not going to happen, you do the best you can despite this limitation.

Saint Louis University, in Saint Louis, Missouri, USA - Museum of Contemporary Religious Art at dusk (black and white)
Museum of Contemporary Religious Art, at Saint Louis University. I needed to convert my image, originally in color, to black and white for inclusion in the book Saint Louis University: A Concise History

If you shoot film, and have your own darkroom or film scanner, black and white photography remains rather inexpensive since you are able to develop and print your own quality photos. If you live in a remote area, this may actually be the most convenient solution as well. Superb quality film cameras are also available at low cost. While you can process your own color film, this is rather more expensive and difficult compared to black and white film.

Sometimes, the lighting conditions are so poor that a black and white conversion is the fastest and easiest way to produce a quality image. I will sometimes convert an image taken under sodium vapor lights to black and white, because the color of that lighting is usually unpleasant and detracts from the beauty of the image. I very often do a conversion when I use an extremely high ISO or severe curves on an image, both of which produce intense noise.

Nostalgia

Do you pine for a time when style of dress and manners were better? Times that were happier, even if more difficult? Do you feel a twinge of romance when viewing those older things? Then perhaps you like the nostalgic look of black and white photography.

[Portrait of Doris Day, Aquarium, New York, N.Y., ca. July 1946] (LOC)
Doris Day, singer and actress, ca. July 1946, New York City. Photograph by William P. Gottlieb.

I must admit to being a bit undecided as to whether this kind of nostalgia is desirable. On one hand, it is pleasant, and escaping the drudgery of the present with an imaginative look at the past is sometimes necessary. On the other hand, ought we not prepare for the future, where we are inevitably headed? Or rather, ought we live our life in the present, the only time we can truly see?

We must not fall into the trap of believing the doctrine of inevitable progress, the idea that things are always getting better and better. And likewise, we must not distrust those who prefer older things; for they may not be reactionaries, but rather they might be correct. The theory of evolution implies eternal betterness, but in reality, for every advancement there are multitudes of fatal mistakes. So what we call nostalgia may very well be a rational attraction to things that were in fact better in some way.

Technical advantages

We must be humble enough to realize that older technologies might actually be better in many ways. A large format camera, with quality black and white film, expertly exposed and processed, will have a range of tones and detail that far exceeds any DSLR snapshot. I do use digital photography exclusively, since it is so convenient, but there are trade-offs.

One of the great advantages of black and white photography is the wide contrast range possible. Often in color, it is difficult to get a full range of tones from pure black to white, since your brightest significant detail may be a saturated color — you can't brighten it without losing saturation. This is particularly troublesome if your brightest color is a pure blue: you just don't have that much room for other tones unless you make severe edits to the image. Blue skies are often a problem: you can't brighten the foreground without risking overexposure of the sky (which will damage the sky color), which is one reason why polarizing filters are so useful.

Holy Family Log Church, in Cahokia, Illinois, USA - exterior at dusk 9 (black and white)
I inadvertently overexposed the sky on this image, turning it into an implausible shade of cyan: but it looks fine when de-colorized. This is Holy Family log church, in Cahokia, Illinois.

With black and white images, you only have to worry about over- or under-exposing a single tone: white or black. With color images, you need to worry about three color channels, any one of which may be poorly exposed, harming the final image. With color, we have a far smaller usable dynamic range, which is why color images benefit from fairly flat lighting. The masters of black and white photography, by contrast, use the increased dynamic range to excellent effect.

View of Gateway Arch from Laclede's Landing - original color
I took this photo for a book on the Gateway Arch. This is a merge of numerous exposures, and the camera was set to automatic white balance. This is a terrible image in several ways, and the yellow sodium vapor lighting is particularly objectionable.

High efficiency electric lighting often has poor color; fluorescent lights are quite bad, due to the unattractive and broad range of green-to-magenta tones they produce. Sodium vapor lighting, with its narrow yellow-orange spectrum, leads to extremely poor color photos. In these cases, a black and white image may be superior.

View of Gateway Arch from Laclede's Landing - black and white
The same series of images, but I converted them to black and white before blending. I did some additional processing on this image, such as applying curves and sharpening. In my opinion, this isn't an image I'd particularly want to see in print, but I do think it is an improvement.

Digital cameras have a linear sensor that responds to light such that twice the brightness registers as twice the signal. Unfortunately, this means that most of the sensor's data is clustered around the very brightest of objects, and there is always a great risk of losing detail through overexposure. This also means that most of the tonal scale will be represented with very little data, which leads to lack of detail and noise in the shadows. So the general advice for digital is that you expose for the highlights and post-process to improve the shadows. Black and white film technology, by contrast, is known for having great shadow detail — you expose for the shadows and post-process to improve the highlights, and unlike digital, it doesn't have a hard cutoff at the ends of the tonal range. Black and white film is traditionally very good for photography in dim, highly contrasty lighting and is used to good effect in film noir.
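A sketch of why a linear sensor clusters its data in the highlights (the 12-bit depth is an illustrative assumption): each stop down from clipping covers half the remaining range, so the brightest stop alone consumes half of all recordable levels.

```python
# Sketch: how a linear sensor allocates its levels across stops of
# exposure. A 12-bit sensor records 4096 levels; each stop down from
# clipping halves the remaining range, so the top stop gets 2048 levels
# while deep shadows get only a handful.

levels = 4096
per_stop = []
top = levels
for stop in range(6):                 # six stops down from clipping
    bottom = top // 2
    per_stop.append(top - bottom)     # levels available within this stop
    top = bottom

print(per_stop)   # [2048, 1024, 512, 256, 128, 64]
```

The sixth stop down, still well within a normal scene's shadows, has only 64 levels to describe its tones, which is why shadow detail and noise suffer.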

Grant's Trail and Gravois Creek Conservation Area, in Saint Louis County, Missouri, USA - unprocessed forest scene
A forest scene, at Gravois Trail, in Saint Louis County, Missouri. This hand-held photo is underexposed, and was shot at ISO 3200. There is hardly any visible detail, and brightening the image would reveal extreme color noise in the shadows.

Grant's Trail and Gravois Creek Conservation Area, in Saint Louis County, Missouri, USA - forest scene converted to black and white
The same image, converted to black and white — I discarded most of the red and blue channels. I brightened the image greatly, and applied some noise reduction and sharpening.

Digital noise is most evident in the shadows, and color digital noise is usually ugly and highly undesirable. By contrast, black and white noise is far less objectionable, and can even improve an image, giving an impression of texture and sharpness. This is often an advantage when shooting at very high ISO, or when brightening a severely underexposed image: a terrible color image can often be dramatically improved by converting it to black and white.

For aesthetics or mood

While nostalgia seeks the better things from the past, and black and white photography may evoke that nostalgia, we must always remember that reality in all ages past is high-resolution, wide-gamut, high-dynamic range color. Would a master photographer of a bygone era have used color photography if it were technologically feasible? Was his mastery of the black and white medium merely making the best of an unsatisfactory situation? Undoubtedly for many, although this is speculation. We do in fact know that color technology was eventually widely adopted, and also that black and white never went away.

Color is an important factor in beauty. Bright colors are pleasing, and there are many studies and theories of the psychology of color which assign good, desirable effects to the various colors. However, the color black is nothingness, the color white can be blinding, and gray is dreary: black and white photography necessarily is less cheerful and pleasant than color. Since black and white is more abstract than color, it can also suggest mystery.

Statue in cemetery - heavily processed
A statue in a cemetery - heavily processed to imply a bleak mood.

So contemporary photographers can use the dreary aesthetics of black and white to evoke a mood of bleakness, despair, and ugliness. This can invoke a kind of anti-nostalgia, seeing not the good in the past, but rather its ugliness, and so black and white photography can be used in a mocking, disparaging fashion. It can also be used with fantasy, where the dull everyday world is seen in black and white, while the fantasy world is in color.

Some conversion hints

When shooting or converting an image to black and white, it is usually essential to adjust the image to give you the full range of tones, otherwise the image may look flat. Good global contrast is essential for a good black and white image. Adjusting curves of color images is a perilous activity: you can have color shifts, oversaturation, and you can send an image out of gamut; these are hardly concerns with black and white images.
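A minimal sketch of that tonal-range stretch (the pixel values are illustrative): remap a flat grayscale image so its darkest pixel becomes 0 and its brightest becomes 255.

```python
# Sketch: stretch a flat grayscale image to the full 0..255 tonal range,
# the simplest form of the global contrast adjustment described above.

def stretch(pixels):
    lo, hi = min(pixels), max(pixels)
    if lo == hi:
        return pixels[:]              # a single-tone image: nothing to do
    return [round((v - lo) * 255 / (hi - lo)) for v in pixels]

flat = [90, 110, 130, 150, 170]       # a low-contrast black and white image
print(stretch(flat))                  # [0, 64, 128, 191, 255]
```

In practice one would use a curve rather than a straight line, but the principle, anchoring true black and true white, is the same.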

Gothic Ornament 2, McMillan Hall, Washington University, in Saint Louis, Missouri, USA - black and white comparison
Gothic-style ornament at Washington University in Saint Louis. The straightforward conversion on the left lacks contrast, which is corrected on the right.

The second important consideration is the conversion of colors to gray tones; there are many ways to do this in Photoshop, and there are some excellent plug-ins that improve the process. Even though you lose color information in your conversion, the various gray tones ought to imply different shades in the final image.

Equal-Lightness colors converted to grayscale
The color image on the left was converted in Photoshop using Image->Mode->Grayscale. This is obviously a fabricated image, since I specifically chose all colors to have the same luminance. Photoshop has many ways to convert to black and white, and some may be better than others at implying changes in tone.

Generally speaking, you want to select the parts or combinations of the RGB channels which show good contrast between different objects. If the subject has stripes, you will probably want a conversion that shows the stripes well; some conversions may not show the stripes at all. For faces, your conversion will have a drastic effect on showing or hiding blemishes and wrinkles. In Photoshop, the Black and White tool is very good for doing this conversion; however, a thorough knowledge of the channel structure of images and the laws of color mixing is very helpful here.
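A sketch of that channel mixing (the pixel values and weights are my own illustration, not Photoshop's): each gray value is a weighted sum of the R, G, and B channels, and the choice of weights decides which colors separate in the final image.

```python
# Sketch: black and white conversion as a weighted channel mix. Two
# equal-luminance stripes vanish under luminance-like weights but
# separate strongly under a red-filter-style mix.

def to_gray(pixel, weights):
    r, g, b = pixel
    wr, wg, wb = weights
    return round(wr * r + wg * g + wb * b)

red_stripe   = (200, 60, 60)    # a red stripe on the subject
green_stripe = (60, 130, 60)    # a green stripe of equal luminance

luminance  = (0.3, 0.6, 0.1)    # even, luminance-like weights
red_filter = (0.9, 0.05, 0.05)  # a red-filter-style mix

print(to_gray(red_stripe, luminance), to_gray(green_stripe, luminance))
print(to_gray(red_stripe, red_filter), to_gray(green_stripe, red_filter))
```

Under the luminance weights both stripes land on the same gray and the pattern disappears; the red-filter mix pulls them far apart, which is exactly the control the Black and White tool's sliders give you.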

Finally, you can add far more local contrast to a black and white image compared to color, while still making it look plausible. This final step has been used to great effect by the masters of the medium.