Camera Position 45 : Back in the Zone

In this second part of the Zone System overview, we look at a few more specifics about how the Zone System works and is implemented. Don’t forget to download the PDF handouts!

Click any image below for a larger view.

[Images: Black and White Film Curve · Color Negative Curve · Color Transparency Curve · Digital Capture Curve · 8-Bit Image Data · 16-Bit Data · Zone Scale & Histogram]

The first part of the Zone System overview was in Camera Position #43

Download part 1 of the PDF handout (part 2 below)

16 thoughts on “Camera Position 45 : Back in the Zone”

  1. Jeff,

    many thanks for the insightful podcast. I really appreciate your time and energy that you put into sharing your knowledge. This is very valuable for me.

    Thanks,
    Kamran

  2. Jeff,

    I know you said that you wanted to get this technical stuff out of the way, so that you could concentrate on the creative stuff, but given that you have brought up the subject, I should like to clear up some confusion.

    You showed us a couple of tables indicating the number of pixel values belonging to each zone, and you also showed a picture of a PhotoShop histogram with the corresponding zones. But the two things don’t match up. For example, you told us that zone VII has 128 levels in the table, but only 25 levels in the histogram figure. Worse still, in the tables each zone has half the number of levels of the next higher zone, but in the histogram figure they all have about 25 levels. I can’t reconcile these two views of zones and levels.

    I’ve looked at the definition of gamma, but I can’t see that that actually solves the problem.

    For what it’s worth, I tend to believe the histogram view of zones, both in Photoshop and on my camera’s monitor, as it matches my experience of changing exposure. For example, if I expose a grey card correctly, then I get a spike in the middle of the histogram. If I then over-expose the grey card by 2 or 3 stops without blowing out the pixel values, then each stop of over-exposure appears to move the spike in the histogram the same amount further to the right.

    Please help me understand.

  3. Chris;

    I think I can see where your confusion comes in.

    I think it’s because you’re comparing two different (“apples & oranges”) things.

    In terms of how values (lightness/darkness or “zones”) are spread out across the histogram, you can probably give an approximation that every 25 Photoshop (note the lower case “s”) levels is roughly equivalent to one “Zone” (256 levels divided by 11 zones comes out to 23.27 levels per zone, but we’re all friends here, so let’s round up to 25). So, we can divvy up those 256 levels into 11 Zones, understanding that few (if any) real-world photographs would have that sort of value relationship. It’s not likely that we’d have a photograph with an equal number of brightness values in every zone, right?
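
    To put numbers on that division, here’s a minimal Python sketch of the 256 ÷ 11 arithmetic (purely descriptive, of course; no real histogram splits up this evenly):

    ```python
    # Divide the 256 levels of an 8-bit file evenly into 11 Zones (0 through X).
    LEVELS = 256
    ZONES = 11
    width = LEVELS / ZONES  # about 23.27 levels per zone ("call it 25")

    for zone in range(ZONES):
        lo = round(zone * width)
        hi = round((zone + 1) * width) - 1
        print(f"Zone {zone:>2}: levels {lo:>3}-{hi:>3}")
    ```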

    So, this layout of values from dark to light across a scale is simply a method (an analogy, really) of how we can describe (talk about) dark, middle and light values in an image. In the film world, it’s really exactly like that, with a reasonably even distribution of information from dark to light and no changes in the amount of “data” that is used to describe dark and light… film is film is film and all we’re doing is describing a density (“light-blocking ability”) difference in that film’s emulsion.

    Digital is different.

    The issue I was trying to point out in the “Bit Depth” section of the podcast (and on page 7 of the PDF) is the way *computer data* deals with *describing* those levels and how many actual *bits* go into describing the lowest values of the scale versus how many *bits* go into describing the highest values of the scale.

    Because digital capture is a linear process, a peculiar part of that process is that it forces us to have less overall digital information (fewer *bits*) describing those lowest (darkest) Zones than we have describing the highest (lightest) Zones. So, the table above “The 8-Bit Problem” http://www.cameraposition.com/podcast/images/CP45/8Bit.png that shows 128 levels for Zone VII is really saying that of the *total* overall exposure for a photograph, half of the *data* (bits) goes towards describing the brightest pixels in the photograph.

    So, in reality, even though we use a histogram to describe various levels of lightness up and down the scale, there is a *lot* more information (“data”, which I’m calling “Levels” because it’s easier for most people to understand, I think) on the light (right) end of the histogram scale than there is on the dark (left) end of the histogram scale.

    The point of those two tables (“The 8-Bit Problem” and “The High-Bit Solution”) is that by using higher bit depth, we can wind up describing more discrete units of value at the dark end of the scale by using more bits (which translates to levels of brightness) to describe those values.
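
    Here’s a rough Python sketch of the halving pattern behind those two tables, assuming an idealized linear encoding and an arbitrary 6-stop range for illustration (the exact stop count and labels in the PDF tables may differ):

    ```python
    # In a linear capture, each stop down the scale gets half the levels of
    # the stop above it, whatever the bit depth -- but a higher bit depth
    # leaves far more levels to describe the darkest stops.
    def levels_per_stop(bit_depth, stops=6):
        total = 2 ** bit_depth
        return [total // (2 ** (s + 1)) for s in range(stops)]  # s = 0 is brightest

    print("8-bit: ", levels_per_stop(8))   # [128, 64, 32, 16, 8, 4]
    print("12-bit:", levels_per_stop(12))  # [2048, 1024, 512, 256, 128, 64]
    print("16-bit:", levels_per_stop(16))  # [32768, 16384, 8192, 4096, 2048, 1024]
    ```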

    And, in your paragraph above that starts “For what it’s worth…” you have accurately described that changing exposure changes the *entire* value scale of the image, making every value lighter (when you expose more) or darker (when you expose less).

    Part of the deal here is that the Zone System is really just describing what *actually happens* in the exposure chain… it’s merely a way for us humans to wrap our minds around the more complex science of how light affects light-sensitive stuff.

    Hope that helps!

    -Jeff

  4. Thank you so much for these two very (Kamran said it best) “insightful” podcasts, charts and PDF files. They clear up a lot of confusion I had. Thanks also for taking the time to do them. You have added much to my knowledge on many levels through your Camera Position and History of Photography podcasts.

    Cheers!

    Pat in Juneau

  5. Pat;

    Thanks for your comments and thanks for listening (and downloading the PDFs!).

    I’m glad you (and others) are getting some useful information from the podcasts and really glad that you’re listening!

    -Jeff

  6. Jeff,

    You’ve mentioned you shoot 4×5 B&W negatives and then scan them. Does that reduce your response curve from the B&W to that of the digital? And since it’s reversal film, will you then have more bits in the shadows instead of the highlights?

    Thanks
    Chris

  7. That’s a good question, Chris.

    The “short answer” to your question is “no… my response curve from a scan isn’t the same as shooting a digital file… it’s better than that.”

    In a sense, shooting large format film and scanning it gives me the best of both worlds.

    With 4×5 negatives, you have the ability to develop each sheet individually, which is a huge bonus. When I shoot my 4×5 negatives, I take meter readings of both important shadows and significant highlights in the scene. If the contrast range is too narrow (highlights don’t fall high enough on the scale for my chosen “placement” of the shadows) then I mark the film holder for increased development (what the Zone System would call “Plus” development). If, on the other hand, the contrast range is too high (highlights fall too high on the scale to render detail) then I mark the film holder for decreased development (you guessed it… called “Minus” development in the Zone System). Probably about 75% to 80% of the negatives I shoot don’t need special development, but the rest need either more or less.

    After development, I then scan the negatives. I’m able to adjust the scanner’s input curves to capture a great deal more detail from dark-to-light than I was ever able to squeeze out of a negative in the traditional darkroom. I can make sure that dark detail is high enough up on the scale to get good separation of tones. Same deal at the high end.

    Post-scan manipulation in Photoshop gives me even more control over how bright the bright stuff is and how dark the dark stuff is.

    Of course, it helps that I’m scanning with an Imacon 848 film scanner, which has gobs of dynamic range (d-max of 4.8!) and the Imacon (now Hasselblad, since they bought Imacon) scan software is really the best I’ve ever seen in terms of precision. Even without the “unfair advantage” of a $15K scanner (belongs to my college, not to me) the combination of a custom-tailored negative and the post-scan controls of Photoshop makes for a really powerful combination.

    -Jeff

  8. Jeff,

    thank you for the great information you provide in both the cameraposition and the photo history podcast. Your talk about the zone system (and the additional pdf) really helped me to understand how the different media (digital sensor, negative and positive film) respond to light and how I have to deal with this in my photography.
    But I have the same problem that Chris mentioned in his first comment. Unfortunately, your answer didn’t make it clearer for me.
    I always tend to think about the zone system as a way to describe the distribution of tones in the final print, accomplished by specific methods that ensure at each step of the whole process (capturing, development, printing) that the tones in the print will sit where I want them to be. If I think this way round, I can ask: if I want a zone five/middle gray in my final print, what would be the appropriate RGB values in my digital file to get this gray on a well-calibrated printer? I would say: I need a 127-127-127 RGB color in my 8-bit file to get this zone five tone in the print (I actually tried this by filling a whole frame with this color in Photoshop). So 127-127-127 RGB would be the “digital representation” of zone five (or, to be more precise: 127-127-127 would represent the tone right in the middle of zone five). In conclusion, the values between 128 and 255 in an 8-bit file would be used to describe the tones between zone five and ten, and the values between 0 and 126 would represent the zones zero to five. This would mean that the computer uses the same amount of data to describe the lower tones in the image as it uses to describe the higher tones (about 126 steps for everything below middle gray/zone five and about 126 steps for everything above middle gray).
    I guess you think the other way round, starting from the capturing and how the sensor responds to light. Your assumption seems to be (if I understood you correctly): doubling the amount of light that hits the sensor = bringing the tones in the image one zone up = doubling all RGB-values. So if the RGB-values 255-255-255 represent pure white or the upper end of zone ten, then dividing this by two (255/2 ~ 127) would give you a tone right in the middle between zone nine and ten. If this is true, the computer would in fact waste half of the possible steps between 0 and 255 (in an 8-bit file) to describe the very subtle differences inside zone ten, leaving not much space to differentiate the lower tones that are much more important for our images.
    The contradiction between both ways of thinking about RGB-values and zones is obvious: If I understood you right, you would say that RGB 127-127-127 represents a tone between zone nine and ten, I would say it represents middle gray/zone five. But I tend to believe that I missed the point somewhere in your argument. Can you help me out?

    (Sorry for the long post, but things are getting complicated…)
    Kai

  9. Hi, Kai;

    I’m not sure if I’m going to be able to clear up any confusion here, because it’s an “apples compared to oranges” situation.

    The “apple” is distribution of values on a scale from pure black to pure white, represented by histogram levels from 0 to 255 or translated to Zone System Zones from 0 to X.

    The “orange” is how camera sensors deal with the problem of turning analog information (light) into digital information (bits), where the realities of the math make it such that more information is available to describe detail at the high end of the scale than at the low end.

    I suppose an easier, clearer way for me to have stated the “math problem” (the orange above) would have been to use the word “Bits” instead of “Levels” to describe how much information is available to describe changes in value at either end of the scale, but I think the use of the word “Levels” makes more sense because it helps people understand that it’s not just data, it’s visual stuff, too.

    So, the “data” is different from the “values”.

    You’re right in saying that 127, 127, 127 is the middle of the scale, analogous to Zone V (I use 128, but that’s not significant here… the point is a “midpoint” between Level 0 and Level 255).

    So, if you take your camera and expose it to a single-toned object so that you make that object Zone V (meter it and do exactly what the meter says), you’ll get a value in your file that will be 128, 128, 128 (or thereabouts, owing to the vagaries of light meters, what texture the object had that would create variation in the tone you get, etc). This would be when shooting a JPEG image, so that you don’t have the “post-processing” issues of interpreting the RAW file.

    If you take another photograph of that same object, but expose one stop more (creating Zone VI in your picture of this single-toned object), you’ll get levels that are in the neighborhood of 180, 180, 180 (again, “or thereabouts” because of the vagaries mentioned above… it’s the “real world” so there’s no telling what the vagaries of subject will bring to the photograph, but the values will be somewhere around 180 when you look at them in Photoshop’s Info palette).

    So, raising your exposure by one stop (one Zone) does not double the Levels. Neither does lowering your exposure by one stop halve them. It’s a smaller change than that. If it weren’t, we would have very little value difference in the photographs we made, right? An image of a scene with a 3-stop range of reflectance would only be black, white and gray.
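
    Here’s a rough Python sketch of why that is, assuming a plain 2.2 gamma encode standing in for the camera’s tone curve (real tone curves differ, which is why actual cameras land “thereabouts” rather than exactly on these numbers):

    ```python
    # The camera applies a gamma (tone) curve before writing 8-bit Levels,
    # so doubling the light does not double the Level.
    def encode(linear, gamma=2.2):
        """Map linear scene luminance (0.0-1.0) to an 8-bit level."""
        return round(255 * min(linear, 1.0) ** (1 / gamma))

    middle_gray = 0.18  # a metered subject rendered as Zone V
    for stops, zone in [(0, "V"), (1, "VI"), (2, "VII")]:
        lum = middle_gray * 2 ** stops  # each stop doubles the light...
        print(f"Zone {zone}: level {encode(lum)}")  # ...but the Level rises far less
    ```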

    I may have muddied the waters a bit by introducing the idea of how digital sensors divvy up the math values of the real world into bits that they can understand and display. It is exactly as I’ve described it in both the podcast and the PDF; there is more digital data being used to describe the high end of the scale than the low end. Note that it’s not about the VALUES (or Zones or Levels) but rather about the DATA being used to *describe* the values (or Zones or Levels).

    The basic idea here was to let people know that if they are in Photoshop, trying to eke out some detail in the low end of the scale in a picture that was a little underexposed, they are going to have trouble getting subtle detail because there just isn’t as much data to describe the lower values as there is to describe the higher values.

    In your comment, you wrote:
    “If this is true, the computer would in fact waste half of the possible steps between 0 and 255 (in an 8-bit file) to describe the very subtle differences inside zone ten, leaving not much space to differentiate the lower tones that are much more important for our images.”

    Though digital cameras aren’t ever going to be able to describe any variations in detail in Zone X (with digital cameras, Zone X is going to be pure white, as is Zone IX), your statement is true, really. At least it’s true when it comes to discussing the math that describes the picture’s tones. The DATA describing the top of the scale is much greater than the DATA describing the low end of the scale, and the way the *math* works is that half of the DATA goes to describe the brightest textured value in your image, half of what’s left over describes the next textured value down the scale, etc., etc.

    Again, “values” are different from “data”.

    Here’s the secret to the whole thing:

    Make photographs.

    Honestly, that’s it. If you make a lot of photographs and try out all these ideas (placing low zones where you want them to be, placing high zones where you want them to be) and then take a look at what you get, you’ll see how it works. It’s really one of those things that you can read and think about how it works all you want, but ultimately, it only makes sense when you begin to apply it.

    Not sure if that helps, but… it’s pretty much all I’ve got!

    -Jeff

  10. Quote:
    “It’s an ‘apples compared to oranges’ situation. … Make photographs … and then take a look at what you get.”
    Ok, I’ll make photographs of oranges and see if I’ll get apples out 😉

    No, sorry; thank you very much, Jeff, for your detailed answer. Maybe I’m getting closer to understanding what is going on here.

    I think we agree that in the final file the same number of “levels” is used to describe the dark tones as the bright tones. But the raw “data” that comes directly from the sensor might be in a different order, with far fewer distinct levels in dark areas than in bright ones (if I got you right, this is your core argument). The “math” that you are talking about would then be to “map” the raw data from the sensor to the 256 levels of the final file, applying a kind of gamma curve that stretches the dark tones and compresses the bright ones.

    If I understood you right so far, I only have one question left: I would assume (although I’m not sure about it) that the in-camera image processing does this “math” in the highest possible bit-depth that the sensor can deliver (mostly 12 or 14 bits I guess), converting the image to 8 bit afterwards. Then, shooting in raw mode would not be much of an advantage in this regard (of course, there are many other advantages, i.e. the two or so extra stops of dynamic range that you mentioned, etc.), because all the critical calculation is done in a higher bit depth anyhow.

    Sorry, it is really hard for me to get this into my head. Unfortunately the empirical testing that you suggested is not that easy, because in the dark areas, the image quality is also heavily reduced by camera noise (at least with my camera).

    Thanks again,
    Kai

  11. Hi, Kai;

    You wrote:
    “Unfortunately the empirical testing that you suggested is not that easy, because in the dark areas, the image quality is also heavily reduced by camera noise (at least with my camera).”

    Exactly! The noise in the dark areas is there in part because of the lack of data available to describe variations in value. We (and the camera manufacturers) try to “amplify” that small signal (small amount of data), but when we do, we generate noise.

    You also wrote:
    “I would assume (although I’m not sure about it) that the in-camera image processing does this “math” in the highest possible bit-depth that the sensor can deliver (mostly 12 or 14 bits I guess), converting the image to 8 bit afterwards.”

    Also correct; the sensor always captures high-bit data, but when we shoot JPEG images (or, in some cameras, TIFF images) we are counting on the camera to reduce that high-bit (12 or 14 bits per pixel) data to 8-bit data.
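
    As a rough sketch of what that reduction involves (assuming 12-bit linear sensor codes and a plain 2.2 gamma standing in for the camera’s real tone curve), you can count how many distinct 8-bit levels each stop ends up with:

    ```python
    # Map 12-bit linear codes through a gamma curve to 8-bit levels, then
    # count the distinct output levels each stop receives. The curve
    # "stretches" the darks: the output is spread far more evenly than the
    # halving 2048 / 1024 / 512 / ... of the linear input.
    GAMMA = 2.2
    MAX_IN, MAX_OUT = 4095, 255

    def to_8bit(code):
        return round(MAX_OUT * (code / MAX_IN) ** (1 / GAMMA))

    for stop in range(6):  # stop 0 is the brightest
        hi = MAX_IN >> stop
        lo = hi // 2 + 1
        outputs = {to_8bit(c) for c in range(lo, hi + 1)}
        print(f"stop {stop}: linear codes {lo:>4}-{hi:>4} -> {len(outputs):>3} of 256 levels")
    ```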

    You *also* wrote:
    ” Then, shooting in raw mode would not be much of an advantage in this regard (of course, there are many other advantages, i.e. the two or so extra stops of dynamic range that you mentioned, etc.), because all the critical calculation is done in a higher bit depth anyhow.”

    Shooting in RAW mode has *huge* advantages because you are allowing yourself the added benefit of being able to access and use the 12- or 14-bit data in the computer. The camera never gets a chance to “dumb down” the high-bit capture to an 8-bit file, so you get the ability to access the extra information that is gained by capturing a file that is described by more bits. So, if you refer to the table in the “part 2” PDF file, you’ll see that with a high-bit file, you have a great advantage in having more data at the bottom end of the scale than you would with an 8-bit file.

    When I’m having trouble “getting it” with some technical thing or another (and it happens *all* the time) I often find it helpful to find other sources for information that may explain it better than I can.

    Here are two:
    A great explanation of bit depth and dynamic range:
    http://www.cambridgeincolour.com/tutorials/dynamic-range.htm

    A wonderful PDF file from Apple about digital photography in general that has a *great* explanation of exposure, dynamic range and bit depth:
    http://manuals.info.apple.com/en/Aperture_Photography_Fundamentals.pdf

    The Apple PDF is 47 pages long. It’s really designed as a support document for their “Aperture” software, but it’s just an outstanding general explanation of digital photography and how it works.

    Thanks!

    -Jeff

    If you measure the linearity of a digital camera, you’ll probably find that it has a curve (a dog leg) in the darkest shadow region, as an interaction of the noise floor of the sensor and the analogue-to-digital converter, whereas in your diagrams you’re getting an idealized straight line.

    I don’t follow your 8-bit problem, because the JPEG will be recorded gamma-encoded with a gamma value of ~2.4. If the 8-bit data were linear (i.e. gamma = 1.0) then yes, we’d have the problem you state, but this is not the case. The 12-bit RAW data from the camera is linear, and hence the number of code values per stop is distributed in the manner you describe. The 8-bit JPEG is made from this linear data via a tone-mapping curve which redistributes the code values per stop much more evenly. Yes, there’s more room to work in the RAW because processing (like white balance) is not burned in, and neither is it compressed. There is extra headroom, yes, but mostly because the tone curve that maps 12-bit to 8-bit is “conservative” to avoid any potential for colour shifts in highlights, whereas in RAW development you can choose to allow more highlight data if the possible colour inaccuracies don’t bother you (or you’re shooting for B&W) or just don’t appear.

    I’d also suggest that a 7-stop dynamic range for modern DSLRs is quite conservative, and indeed it all depends on your noise tolerance, as it’s most likely to be noise-limited by the dog leg of the curve mentioned earlier.

  13. Graeme;

    Thanks for your comments.

    Rather than get into a “how many pixels can dance on the head of a pin” discussion here, I’m going to refer you and others to the links I recommended in my previous comment, as they do a better job than I can at explaining the vagaries of how digital exposure works and how digital camera sensors function. I think that Apple PDF is really pretty outstanding at explaining how it all works in a fairly small nutshell.

    I’d also refer folks to “Real World Digital Photography”
    http://www.peachpit.com/bookstore/product.asp?isbn=0321223721&rl=1

    Understand that my point in these couple of Zone System podcasts was not to debate the fine points of digital photography’s technology, but rather to help photographers understand how to better control their own creative expression in the medium.

    Thanks!

    -Jeff

  14. Jeff,

    Awesome website and podcasts. Thanks for doing this. Although your “focus” is on the creative side of photography, I find your explanations of the technical aspects better than any I have found elsewhere.

    I have a question, though. You said:

    “So, the table above “The 8-Bit Problem” http://www.cameraposition.com/podcast/images/CP45/8Bit.png that shows 128 levels for Zone VII is really saying that of the *total* overall exposure for a photograph, half of the *data* (bits) goes towards describing the brightest pixels in the photograph.”

    I don’t think saying that half of the *bits* go into the highest zone is the same as saying half of the *levels* go into it.

    Consider: One channel of an 8-bit JPG image would use values from 128 to 255 (10000000 to 11111111) to describe the highest zone, and would use values from 8 to 15 (00001000 to 00001111) to describe the tones that are 4 zones lower. So the highest zone is using 1/2 of the available levels, but 7 bits are required to do it. The tones 4 zones lower are using only 8 levels, but 4 bits (half of the total bits) are required to produce those 8 levels.
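
    A quick Python sketch of that comparison, under the same idealized linear assumption (hypothetical zone boundaries, just to contrast levels with bits):

    ```python
    # Half the *levels* is not half the *bits*: within each zone, the number
    # of bits needed to tell its levels apart is log2(level count).
    zones = [
        ("highest zone    ", 0b10000000, 0b11111111),  # 128-255: 128 levels
        ("four zones lower", 0b00001000, 0b00001111),  #   8-15:    8 levels
    ]
    for name, lo, hi in zones:
        count = hi - lo + 1  # a power of two here
        print(f"{name}: {lo:08b}-{hi:08b}  {count:>3} levels, "
              f"{count.bit_length() - 1} bits to distinguish them")
    ```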

    Of course, this is assuming that the sensor and the A/D converter are linear and proportional throughout this range. Have I got this right, or am I missing something?

    David

  15. Hi, David;

    Thanks so much for your comments; I’m glad you’ve found the podcasts useful and interesting.

    Much as I’d like to, I can’t take the time to respond to all the comments about the technical parts of digital capture as it relates to the zone system.

    So, I’ll refer you to the links that I have above for other resources, encourage you to check those out and also encourage you to go make some photographs in the “real world” of reflective objects. Meanwhile, I’ve got some cool photographs to talk about on Camera Position.

    Thanks!

    -Jeff
