Many art instructors say that the most important element to get right in a painting is value. ‘Value’ in this context refers to light vs. dark. A grayscale photo is usually readable even though it lacks color. Value is what tells us the most about the masses in the scene and how they relate to each other.
It is a challenge, for the realist painter, to defeat the brain’s interpretation of value in order to paint the scene more like it “really” is. This is necessary so that when the viewer of the artwork unconsciously applies their own, similar interpretations, it will look “right” to them. The problem for the painter is that the brain’s distortions of value are sneaky and imperceptible. They are embedded in the firmware.
Value relationships and local contrast in photography
The first photo below shows how my camera exposed this image of the Seattle waterfront. It metered for the sky in the background, and as a result, the foreground buildings are too dark to read. But this does faithfully depict the value relationship between the sky and the buildings — the sky is much brighter.
In the next photo, I boosted the exposure in post-processing to make the buildings discernible. This washed out the sky, which is unavoidable as long as we want to preserve the absolute value relationship between the sky and the buildings, because a screen (or a paper print) can’t express as many levels of brightness as our eyes can perceive.
Now, when we view the scene in person, we are able to see the detail in both the light areas and the dark areas, and we mesh them in our mind’s eye without any of the “washing out.” The effect is similar to the next image, which was produced by a technique called tone mapping. Tone mapping is mainly what people are referring to when they talk about HDR (High Dynamic Range) photo processing. This version of the photo makes all of the information readable in both the light and the dark areas of the scene, but it sacrifices contrast, making the image relatively flat and monotonous.
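For the curious, here is a minimal sketch of what a global tone mapping operator does, using the simple Reinhard curve L/(1+L) on a synthetic luminance array (the scene values are invented for illustration; real HDR pipelines are more elaborate):

```python
import numpy as np

def reinhard_tonemap(luminance):
    """Compress high-dynamic-range luminance into [0, 1) with L / (1 + L)."""
    return luminance / (1.0 + luminance)

# Synthetic scene: dark foreground buildings (~0.05) up to a bright
# sky (~50.0) -- a 1000:1 range no screen or print can show directly.
scene = np.array([0.05, 0.5, 5.0, 50.0])
mapped = reinhard_tonemap(scene)
```

Note that the ordering of the values is preserved (the sky stays brighter than the buildings), but the bright end of the range is compressed far more than the dark end, which is exactly the loss of contrast, the flatness, described above.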
In these photo edits so far, we have been manipulating global contrast. One way to combat the flatness of the tone mapped image is to enhance local contrast, by exaggerating color differences at the boundaries between colors. When it works, it is effective at adding drama and intensity. When it’s overdone, it creates artificial-looking halos. In the extreme example below, look at where the buildings and sky meet, or at the contour of the utility pole. You can see how the edges of the street sign were darkened, with the effect fading toward the original color visible at the sign’s center. Normally, when this tool is used, it is subtle enough that you don’t consciously notice.
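The halo effect is easy to reproduce. Here is a minimal sketch of local contrast enhancement by unsharp masking, applied to a one-dimensional “scanline” crossing a building/sky edge (the signal values, radius, and amount are invented for illustration). The overshoot on each side of the edge is the halo:

```python
import numpy as np

def unsharp_mask_1d(signal, radius=3, amount=1.5):
    """Boost local contrast by adding back the difference from a blurred copy."""
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)  # simple box blur
    blurred = np.convolve(signal, kernel, mode="same")
    return signal + amount * (signal - blurred)

# A hard edge: dark building (0.2) meeting bright sky (0.9).
edge = np.concatenate([np.full(10, 0.2), np.full(10, 0.9)])
sharpened = unsharp_mask_1d(edge)
```

Near the edge, the building side is pushed darker than 0.2 and the sky side brighter than 0.9; those bands of overshoot are what read as halos in an over-processed photo.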
All these same tricks are available to a painter. We could think of painting from life as a process of mental tone mapping and local contrast optimization. Where we can get into trouble is if we fail to keep the value relationships consistent within the image. For example, in all of the previous edits, the buildings and street sign remain darker than the sky, so you can still read how the space is laid out. But if we were to make the buildings lighter than the sky, say, the viewer would become confused about what is where in the scene.
It is difficult to mess up global value relationships in a photo, but easy to do it in a painting.
Of course this photo edit isn’t a convincing example of the kind of error you might make in a painting. I’ll share some more plausible examples below.
Local color vs. reflected color
Our visual system is unlike a camera in that our brain is trying to construct a model of what we are looking at. In the process of doing that, it distorts our value perceptions.
In the famous optical illusion below, the squares labeled “A” and “B” are the same shade of gray. If you don’t believe it, take a screenshot and use photo processing software to analyze the pixels. I am familiar with this image and I still don’t believe it when I look. Without permission, our brains are reverse engineering this image and trying to tell us about the surfaces and the light that generated this rendering.
The local color of square B is the same as one of the light gray squares next to square A. Our brain tries to factor out the shadow cast by the cylinder, and tells us B compares to A the same way one of A’s adjacent squares compares to it. The brain only gets away with this because A and B are separated from each other. If you cut up the image and put the two squares next to each other, the illusion breaks.
What this means is that the brain is trying to be helpful, but at the end of the day it lies to us.
Let’s look at two examples of how the brain’s attempts to be helpful can impede realistic painting. Both types of error can be seen in a study I did of a plaster cast. The image below on the left was provided as a reference photo for a class I took at Gage Academy of Art. I think these lips are from Michelangelo’s David. On the right is my painting of the cast.
What’s wrong with this study? Problem 1: The light-valued strip at the crease between the lips is too bright. Problem 2: The left side and the right side of the cast are shown as about the same value, when the right side is actually in shadow and should be at least 25% darker. Squinting at these images may help to show the differences.
Problem #1: Overestimating light areas in shadow
First let’s look at the shadow between the lips. Here is a close-up.
It looks like the shadow has a light part and a dark part, right? The effect is very strong for me. But one of the things our brains do is amp up local contrast. This makes edges stand out, which helps us perceive form. So some of the effect in this shadow is influenced by how dark and how light the surrounding areas are. What happens to the shadow when we paint over the surrounding areas with a neutral color?
Removing the local contrast all but eliminates the impression that there is a light part and a dark part inside the shadow. I swear I didn’t edit the photo other than adding the gray stripes (try it yourself). What we are seeing is a halo, just like in the over-amped HDR image. If there is any reflected light in that lip shadow, it is so subtle that you would probably be better off painting the shadow as a solid color and letting the viewer’s brain inject the halo, rather than exaggerating it and having it look fake.
Here is another example of this error. I painted the below sphere as part of Sarah Sedwick’s oil painting program as a six-value study. The sphere’s shadow is faintly illuminated by light that is reflected from the surface the sphere is resting on. However, I painted it lighter than I should have because it’s juxtaposed with the occlusion shadow, which is the darkest shadow just underneath the object. Compared to that shadow, the reflected light looked so bright that I painted it as bright as the light side of the sphere. But it’s still part of the shadow and needs to be treated in that context.
Problem #2: Substituting local color for reflected color
The next issue is that the brain normalizes the value on each side of the cast since it is clearly a symmetrical object. The information that our brain presents to us is a 3D model of the cast that we think the reference image is a photograph of. Then we may naïvely paint local color, out of context of the scene with its light and shadow.
Below I have analyzed the photo of the cast into eight value levels. Notice that on the chin area, value 3 on the left and value 5 on the right are two steps apart on an eight-step scale, yet in my painting they were given the same value.
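This kind of value analysis is just quantization. Here is a minimal sketch, with hypothetical luminance samples standing in for the two sides of the chin (the bin numbering here runs 0 = darkest to 7 = lightest, which may not match the scale used in the image):

```python
import numpy as np

def posterize(gray, levels=8):
    """Map [0, 1] grayscale values to integer bins 0 (darkest) .. levels-1 (lightest)."""
    return np.clip(np.floor(np.asarray(gray) * levels), 0, levels - 1).astype(int)

# Hypothetical samples: the lit (left) side and shadowed (right) side
# of the chin land two steps apart on the eight-step scale.
lit, shadowed = posterize(0.60), posterize(0.35)
```

Running a reference photo through a filter like this makes it much harder to fool yourself: two areas either fall in the same bin or they don’t.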
Here is the naïve cast painting next to a version with both problems corrected. There still are issues, like with the lower lip’s shadow on the chin, but you can see the improvement.
For a bonus example, I’m sharing a still life of some garlic and a jar of beans (yes, I refused to paint all of those stupid beans in the pile). Ignoring the differences in the drawing, the leftmost clove of the head of garlic is too light. If you squint and direct your attention to the area where the red arrow points, you can see that the right-hand face and bottom edge of that clove are not dark enough, so they fail to convey the fact that they are facing away from the light source. I made this mistake because I was too caught up in painting the clove in isolation and making the little paper flakes look right. After I finished and stepped back, I saw that this component didn’t quite have the right value relationship to the scene overall.
Reverse engineering the reverse engineering
Why do I think that capturing accurate value relationships is among the hardest things, if not the hardest thing about realistic painting? According to the documentary Tim’s Vermeer, there’s no visual equivalent of perfect pitch, because perceiving absolute global value is not physiologically possible. The claim is that there is a bottleneck at the optic nerve, which has low enough bandwidth that the signals have to be compressed into relative differences vs. surrounding colors. Which would mean that human vision is not like a sensor dump from a digital camera. Global value data is already lost by the time the brain gets hold of the signal.
So essentially, there may be no way to “get at” the raw perceptions before they are interpreted by the brain. The only solution is to train ourselves to guess what was originally there based on the processed version we have access to. This takes practice, and for example, painting teacher Mark Carder recommends that beginner students use a device called a color checker, on which they can put a blob of paint and then hold it up to the scene to compare.
Realizing that the scene we perceive is different from the scene that is “actually there” seems like a pretty apt metaphor. We are prone to exaggerate and assume in pretty much all areas of life, as far as I can tell. The trouble is that we get caught up in an area of detail and lose track of how that part relates to the whole.