Flipped – doing color negative inversions manually

One of the frustrating bits of digitizing color negatives is getting the colors to come out right. At least, that’s how many people feel, and I can relate. There are many ways of doing this, and there’s also software dedicated to the purpose. Since scanning is a bit of a sideshow for me, I make do with just the curves tool in GIMP.

Prefer to be talked through it with some illustrations instead of having to read all this? There’s a YouTube video as well!

Let me start with this: I’ve not tried the dedicated tools for color negative inversions. The results I see from those tools are mostly really good. I imagine they offer a simpler workflow than the way I do it, and if you’re looking for something quick and easy, I’d look in that direction. There’s no real need to reinvent the wheel.

I do this manually because I don’t have to deal with negative inversions a whole lot. When I do, it’s often because I want to analyze some aspect of the image; for instance, a comparison between different types of film. In those situations, I find it helpful to be able to gauge how the color curves cross over, for instance. By using a manual curves adjustment to do the inversion, I get more insight into the structure of the image than if I were to unleash an automated tool onto them.

It all starts with a scan of a negative, or a series of negatives. It doesn’t really matter where the scan comes from, as long as none of the color channels are clipped. Since color negative film is a low-contrast medium, this risk is usually negligible.
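If you want to verify on a scan that no channel is clipped, a quick check is to count how many pixels sit at the extremes of the range. A minimal sketch, assuming float image data in the 0–1 range; the 0.5% threshold is my own choice, not a standard:

```python
import numpy as np

def channel_clipped(channel, limit=0.005):
    """Return True if a meaningful fraction of pixels sits at either
    extreme of the range, suggesting the scanner clipped data there."""
    at_black = np.mean(channel <= 0.0)
    at_white = np.mean(channel >= 1.0)
    return at_black > limit or at_white > limit

# A healthy negative scan: low contrast, everything bunched mid-range
rng = np.random.default_rng(42)
scan = np.clip(rng.normal(0.55, 0.05, 10_000), 0.0, 1.0)
```

Because color negative film is so low in contrast, a check like this will almost always pass, which is exactly the point made above.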

Preferably, the scan has as little color correction applied to it as possible. This is a bit of a red herring, since a scanner will always interpret an image in a certain way, and if you were to scan the same negatives on two different types of scanner, there would always be differences in the output. In general, the scanner setting intended for scanning slides/color positives will work best. Here are, for instance, the settings I use on my old Epson 4990:

Note that the Color dialog is set to ‘No Color Correction’. Also note that I’m scanning in 24-bit color mode here; I would recommend scanning in 48-bit mode instead because it will significantly reduce the risk of posterization in the final image.

However, since I scanned directly into GIMP in this example and GIMP’s rudimentary scanning interface only accepts 24-bit color input, I’ve used it here. I’m also scanning at a lowly 1200dpi, which for 35mm frames is a little low, but again, for sake of the example, it’s plenty good enough.

Notice the ‘Auto Exposure Level’ slider in the illustration above? That’s part of the problem I mentioned: scanners have a mind of their own when it comes to interpreting the original. Out of necessity, certain assumptions must be made in the design of the scanner hardware (how linear is the response of the CCD sensor sites, how linear is the analog gain circuitry, and how much gain is needed anyway) and its software (how should the data acquired from the hardware be interpreted). There’s no escape from this, although software like VueScan and SilverFast has far more manual controls that should allow you to fix some of these settings so that they are at least consistent across individual scans.

Here’s an example negative strip scanned with the settings above:

Looking at the histogram, we see that essentially all color information is bunched up in the middle, and that the red, green and blue channels are shifted in relation to each other:

Notice the distinct peaks of blue, green and red, in that order from left to right. This is the orange mask, and logically, it is considerably shifted to red and balanced away from blue. In other words, it’s…orange. This, by the way, is ECN2 film and the mask of regular C41 film looks slightly different, but the difference is really only marginal.

If I were to characterize the image information by means of the histogram above, it would be an essentially red image with little green and almost no blue data, and all three channels have exceedingly low contrast. This is what makes color negative inversion tricky, because apart from the inversion itself, we need to apply a massive contrast boost as well as a color shift to make the channels overlap.

This combination of a contrast boost and a color shift is inherently challenging, since every color adjustment is at the same time amplified massively. Getting the image to balance naturally is a bit like balancing on a very sharp mountain ridge with deep ravines of atrocious color rendition on both sides.
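In numerical terms, the whole per-channel operation boils down to stretching a narrow band of values to the full range and flipping it. A minimal sketch of the idea, assuming float image data in the 0–1 range; the cut-off values are illustrative, not taken from an actual scan:

```python
import numpy as np

def invert_channel(channel, black, white):
    """Invert one color channel of a negative scan (values 0.0-1.0).

    `black` and `white` are the cut-off points read from the histogram.
    Stretching the narrow band of image data to the full range is the
    massive contrast boost; choosing different cut-offs per channel is
    the color shift, performed in the same step."""
    stretched = (channel - black) / (white - black)  # contrast boost
    return 1.0 - np.clip(stretched, 0.0, 1.0)        # inversion

# Illustrative red-channel values, bunched up just below the mask peak
red = np.array([0.55, 0.60, 0.70, 0.80])
positive_red = invert_channel(red, black=0.50, white=0.85)
```

Note how a small change in `black` or `white` moves the output a lot, which is the sharp-ridge balancing act described above.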

I can then use the information in the histogram to invert and very roughly color correct the image by handling each color channel separately:

Here, the first adjustment for the red channel is made. Note the image information that’s bunched in the center of the histogram. I simply lop off the orange mask, which is the sharp peak that makes up the right-hand side of the image data. I set the white point to the point where the histogram has tapered off. Note also that I’ve inverted the curve, so I’m doing the inversion and initial color balancing at the same time here.
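If you’d rather derive the cut-off points numerically than read them off the histogram by eye, near-extreme percentiles are one way to do it, so that a handful of stray pixels doesn’t set the endpoint. This is a hypothetical helper, not part of my GIMP workflow, and the percentile values are my own choice:

```python
import numpy as np

def estimate_cutoffs(channel, low_pct=0.5, high_pct=99.5):
    """Candidate black and white cut-off points for one channel.

    Near-extreme percentiles are used instead of the absolute min/max
    so that tiny specks and dust don't dictate where the curve ends."""
    black, white = np.percentile(channel, [low_pct, high_pct])
    return black, white

# Synthetic channel spanning 0.4-0.8, as a low-contrast negative would
channel = np.linspace(0.4, 0.8, 1000)
black, white = estimate_cutoffs(channel)
```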

You can see in the negative strip on the left what the effect of this adjustment is. It’s not looking very pretty yet, but that’s because I’ve only dealt with the red channel. So let’s do green and blue as well:

Green inversion and adjustment, on top of the existing red adjustment.
Blue inversion and adjustment, on top of the red and green ones.

Notice how the image now looks fairly neutral. If I’m critical, I’d say that it’s biased towards yellow and slightly towards green as well. What I do at this point is cycle through the channels and move the top and bottom points of the adjustment curve to the left and to the right, while observing the actual image. By doing this, I can visually estimate at what point a certain color cast in either highlights or shadows emerges, and intuitively seek the point where the colors are the most neutral – or at least where they match my expectations for the image.

By doing this, I can come up with the following adjustment curve for this image:

To me, this looks pretty close to how I experienced this scene, and fairly neutral in terms of color balance. There is no severe color cast, and both shadows and highlights (insofar as present in this image) look neutral.

Note that the judgement of these adjustments is entirely subjective. This means that the color rendition of the monitor and the lighting conditions matter; I’m currently doing this as the sun shines into my studio, which isn’t ideal for judging color. And I’m using a consumer-grade monitor, although I have calibrated it. But the most important factor is really my own color vision, which is personal, and anything but absolute.

If you want to reduce this subjective aspect, the best approach would be to photograph a color checker on each roll of film and use that as a reference for determining the color correction curve.
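If a grey card or color checker is in the frame, the balancing can also be anchored numerically: scale each channel so the measured reference patch comes out equal in R, G and B. A minimal sketch, assuming a float image in the 0–1 range; the patch reading is made up:

```python
import numpy as np

def neutralize(image, patch_rgb):
    """Scale each channel so a neutral reference patch becomes grey.

    `patch_rgb` is the mean RGB measured over the grey-card area of the
    already-inverted image. The mean of the patch is used as the target
    so overall brightness is roughly preserved."""
    target = np.mean(patch_rgb)
    gains = target / np.array(patch_rgb)   # per-channel correction
    return np.clip(image * gains, 0.0, 1.0)

patch = [0.52, 0.48, 0.44]       # hypothetical: patch reads slightly warm
image = np.full((2, 2, 3), patch)
balanced = neutralize(image, patch)
```

This only fixes one point on the curve, of course; the shadow and highlight endpoints still need the per-channel treatment described above.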

In the examples above, I’ve used one frame to base my adjustments on. I admit that it’s a frame that’s not the most fortunate choice for this purpose. In general, a good frame to start with is one that contains a neutral color reference – an overcast sky is nice, or some grey/neutral clouds. Skin tones can work, too, but tend to be confusing in my experience. Concrete is a good alternative to clouds as it’s mostly fairly neutral. A grey card is of course also great, but be sure to keep the real grey card at hand while doing the color correction, so you can match your screen image to the real thing (with clouds, this is a little tricky).

A good reference frame also has fairly high contrast and a good exposure; i.e. it’s neither under- nor overexposed, and contains both pronounced shadow areas and highlight regions. This helps to get the contrast right. This is one more thing that’s essential to note: in the curve adjustments above, I deliberately cut off the major part of the histogram for each color curve. Evidently, I only want to cut off the bits that don’t hold any image data – otherwise I’ll blow out highlights or introduce featureless shadows.

On the highlight side, the histogram shows the cut-off point fairly well, but a good reference image helps as a verification that no highlights are actually getting blown out. This is especially relevant when scanning frames where highlight areas are scarce and thus only create a small peak on the histogram, making them easy to overlook.

On the shadow side, things are much trickier, because there’s a sort of valley between the image data (particularly, its shadows) that transitions into the featureless orange mask. Where you cut off the color curve in this region takes some guessing, and it’s also dependent on how the images are exposed. If the negatives are slightly overexposed, it’s generally easier to separate the shadows from the orange mask and determine a good cut-off point for the color curves. With underexposed images, it’s more of an arbitrary point and you’ll have to just lop off some of the shadow information to prevent dark areas from getting color casts and becoming excessively grainy, with lots of chroma noise.

Highlight and shadow cut-off points for the green curve (stretched horizontally to enhance visibility). Notice how the histogram tapers off to the horizontal axis on the left. This is where the highlights transition into pure white, and it’s a good spot to place the upper point of the curve. On the right-hand side, there’s a very strong peak, which represents the orange mask. To the left of this peak, there’s a valley that then slopes upwards into the majority of the image area, transitioning through the deep shadows and then up to the midtones. Where you place the cutoff point in this area is more subjective.

Because of the somewhat arbitrary cut-off points, especially in the shadows, it’s important to cycle back and forth between the color channels. If you change, for instance, the green curve’s shadow cut-off point (maybe because you decide you’ve lost too much shadow detail and want to recover some of it), a color cast in the shadows will appear. You then have to go back to the red and blue channels to fix this color cast.

In the examples above, I’ve shown only straight line adjustments – they’re not actually curve adjustments at all. I find that such straight-line adjustments form a very good starting point and often are actually good enough to my taste. I also find that well-exposed and well-developed film tends to come out okay with these kinds of adjustments. Heavily expired film and film that has been processed under less than ideal conditions sometimes benefits from applying a slight bulge or bump to a part of the curve, to selectively fix a color issue that exists in only a small part of the density range.
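Such a bulge amounts to an extra control point that the curve passes through. GIMP interpolates smoothly between control points; np.interp gives a piecewise-linear stand-in that’s good enough to show the idea, and the point values below are invented:

```python
import numpy as np

def apply_curve(channel, xs, ys):
    """Map channel values through a curve given as control points
    (piecewise-linear; GIMP's curves tool uses smooth splines)."""
    return np.interp(channel, xs, ys)

# A straight inversion would map 0.35 to 0.65; the extra control point
# lifts the lower midtones slightly, correcting a cast in that range
# only, while the endpoints stay put.
xs = [0.0, 0.35, 1.0]
ys = [1.0, 0.70, 0.0]
channel = np.array([0.0, 0.35, 1.0])
out = apply_curve(channel, xs, ys)
```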

What became of the strip I started out with? By basing my adjustments on a single frame, I arrived at a very decent (I think) color balance for the entire strip:

The work was of course made easier by shooting a film with a not too outlandish color rendition; e.g. the more mellow Vision3 films and Portra are generally easier to balance out than e.g. Ektar, with its inherently high saturation. It also helps that the frames above were all shot under similar overcast conditions, which makes them fairly muted and neutrally colored. In selecting a reference image to start working on, you could take these sorts of considerations into account.

And if you like the adjustment, you can of course save the curve in your editing software to use it as a starting point for later scans. I generally don’t do this because of the auto-exposure issues mentioned earlier, and because I generally scan a variety of film, shot and processed under a variety of conditions, so a case-by-case approach is OK for me. But if you want to bring consistency in your work, saving a successful default curve would be a logical step.
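If you want the saved curve to survive a change of editing software, the control points can also be stashed in a plain file. A sketch using JSON; the preset name and all point values are placeholders, not my actual settings:

```python
import json

# Hypothetical per-channel control points as (input, output) pairs in
# the 0.0-1.0 range, describing the inverted straight-line curves.
curve = {
    "red":   [(0.50, 1.0), (0.85, 0.0)],
    "green": [(0.42, 1.0), (0.80, 0.0)],
    "blue":  [(0.30, 1.0), (0.72, 0.0)],
}

serialized = json.dumps(curve)       # write this string to a file
restored = json.loads(serialized)    # note: JSON turns tuples into lists
```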

8 thoughts on “Flipped – doing color negative inversions manually”

  1. Recently I “discovered” another desirable step in this alternative scanning process – the need for a conversion from the color space of the negative to the color space of the final image file. The scanner software with the “negative film scan” option activated does this for you internally, but if one scans a color negative with the “slide transparency” option or with a DSLR, one must also do the color space conversion to get the right colors.
    I did color negative scans in a similar way to you, but I often ended up with irreparable hue shifts and saturation errors, especially in red areas.
    Basically, if the color neg is scanned in a “natural colors” manner (i.e. as an eye-observable object), the colors will be off, because the color negative stores color information in a different color representation. The colors are probably stored in the color neg more saturated than real-world colors, but maybe not with the same saturation across the channels, and the dye hues don’t match the sRGB channel hues.
    Color space conversions are mathematically done using channel mixing (i.e. adding or subtracting input RGB channel values to get the values of the output RGB channels), so I set up rough channel mixing ratios for the negative scans I do using a DSLR. I was able to rectify hue shifts in reds and skin tone issues, which was my biggest problem… For best results I will need some test negative with a color chart, but so far I haven’t gone that far…

    1. That’s an interesting take on things, Ivan, but I’m afraid it doesn’t make much sense from a viewpoint of how ICC profiles work. The statement that “dye hues don’t match the sRGB channel hues” is not a meaningful one.

      There can be a problem if you acquire color image data from the scanner in e.g. Adobe RGB space, do the inversion & color balancing in that space, and then view the image on a non-color managed device/app that interprets it as e.g. sRGB. This is no different from any other ICC conflict and the issues we’ve dealt with in e.g. web publishing for years. It’s not inherent to scanning, let alone to how color film works or how dyes are made.

      From my end, I’ve not run into any issues with this. When I scan in the way described in the blog, I scan into default sRGB space and do the editing there. If I then export e.g. a jpeg for web viewing, it’s already in sRGB (since all editing was done in that space anyway) and it’ll render just fine on any device, since sRGB is the de facto standard. If you end up with image data in a different profile, you may (will) have to convert to sRGB at some point for web publishing. In practical terms it does not matter very much at what stage in the process you do this.

      1. RGB primaries according to sRGB colorspace have their defined hues, see horseshoe diagram at https://en.wikipedia.org/wiki/SRGB. This is what I mean.
        Color negative has its own colorspace, its primaries are in different places in the color diagram. You cannot simply slap its primaries onto sRGB primaries. You must recalculate them to get correct color rendition on sRGB device.

        1. This is moot once you do the color balancing of your scan. No additional profile conversion is required.

          I’m afraid you’re confusing a bunch of things: spectral sensitivity of color emulsions, dye absorption peaks and display ICC profiles. There’s a host of other factors that also come into play in the long image chain that starts with a real-world scene and goes through color film, a scan and ultimately ends up on a computer monitor (or printing paper). ICC-profiling plays a role in part of that process insofar as it happens in digital space and involves input and output devices that do not default to the same space (usually sRGB). In that sense, profile conversions can be necessary, e.g. to (mostly) correctly render colors on several output devices regardless of the characteristics of the device the image data were initially captured or edited on.

          The direct relation between ICC profiles and dye clouds in color negative film that you’re implying with your comment simply doesn’t exist.

          1. A negative is an “imaging device” on its own, no matter if it’s ICC profiled or not. Additionally, I am not talking about ICC profiling at all. But once the real-world colours are transferred into and displayed in RGB primaries, they always end up in their defined colorspace, no matter whether they are represented in the digital or analogue realm.
            Moreover, you cannot perceive a negative as an image, but rather as an “image transfer device” with its own “coding scheme”, and the optical image created on the negative has very little to do with the original scene.
            During the film exposure, the colours of the scene are split through the film’s spectral sensitivity curves (which are similar to the spectral sensitivity curves of the cones in human eye retina) into three luminosity values. The three dyes serve merely to carry these 3 values.
            Problem is that your approach, i.e. to scan the negative as real-world eye-perceivable object, doesn’t extract the density values of the dyes, but it takes their hues as real-world colours. Which is wrong, because the hues in the negative are no real hues of the primaries of the image the negative carries. They only serve as channels, modulating the luminance values of the primaries.

          2. I’m sorry Ivan, I can’t agree with your reasoning for two main reasons: (1) the argumentation as such is inconsistent and (2) the practical results prove the opposite. As to (1), you mention that by scanning, I would somehow interpret the negative as a real-world image and not a codified representation. That’s not correct, since the scanner is really an RGB sensing device that outputs a signal broken down into these three channels. I then take these raw data and re-interpret them into a visual image. This is analogous to what happens when enlarging onto color paper using narrow-band LEDs, as I do on an almost daily basis. As to (2), it’s very hard to accept your criticism in the face of perfectly naturally-looking images resulting from the inversion & color balancing process. These images match what I get from optical prints, although of course both digital scans and optical prints are quite flexible, so there’s plenty of opportunity to tailor either to a specific end result.

            So I’m afraid that we’ll have to decide to ‘agree to disagree’ on this topic. Thanks for sharing your thoughts; they’re certainly stimulating and interesting, but in this particular case, I don’t think your concerns are consistent with the realities of how color is encoded and then re-constructed in a film-based process.

  2. I am about to embark on digitizing nearly 40 years of color negatives using my digital camera. Naturally, I find what you write to be very interesting.

    I would think, however, that for a given film type processed by the same lab (let’s assume a reliable one), it should be possible to do the reversal “by the numbers”. I.e. there is a single set of RGB curves which will yield a result “identical” to that which would have been obtained with color positive film and the same scene. This of course does not address further interpretive issues, nor correction due to bad exposure or bad light. But it would be a good starting point for digitizing. What do you think?

    Secondly, your method of “lopping off” the orange mask artifacts seems problematic to me, since there is probably valid image information in those frequencies. I think this is perhaps the reason some sources suggest illuminating the negative with light which is complementary to the orange mask.

    I think a similar issue of “lopping-off” information might be occurring by anchoring the top part of your curve away from the upper left hand corner.

    What do you think?

    1. Hi George, thanks for sharing your thoughts. I agree that in principle, a given film processed within relatively tight controls could be corrected with the same base curve, after which the interpretative issues (as you aptly call them) could be settled on a per-image basis. This is ignoring potential issues w.r.t. deterioration; dyes fade and it’s of course possible that some strips end up being more affected than others.

      As to lopping off the mask: no, this is not problematic. The image-wise mask is part of the dye image. If you do a white-light recording, provided that the entire density scale is within the dynamic range of the digital capture, there’s nothing ‘hiding’ in the mask or anything. The main risk with cutting off the curves at some point is that there’s image information that doesn’t show up on the histogram, so you end up overlooking it. However, this is usually not really an issue, since its absence from the histogram implies that it covers only tiny areas – typically things like small specular highlights, which are generally fine to blow out anyway. If you feel this is a problem, you could work around it (a bit) by not using a linear curve, but an S-shaped curve that starts all the way at the top left and ends at the bottom right of the entire capture range. Then adjust the middle part to get the desired color balance. This effectively entails a compensating effect in both shadow and highlight areas. I personally don’t think it’s of much merit; try it out and see how much difference it makes.

      Give it a try and experiment a bit; see how you feel about the matter after having played around a bit with curves. I’d be glad to hear about your experiences.
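As an aside: the S-shaped alternative suggested in the reply above (endpoints pinned at the extremes of the capture range, with a steepened middle section) can be sketched with a simple interpolated curve; the control points below are invented:

```python
import numpy as np

def s_curve_invert(channel):
    """Inverted S-curve spanning the full capture range.

    Compared to a straight line with hard cut-offs, the gentle slopes
    at the ends let shadows and highlights roll off instead of
    clipping; the steep middle still provides the contrast boost."""
    xs = [0.0, 0.45, 0.65, 1.0]   # input values (negative densities)
    ys = [1.0, 0.85, 0.15, 0.0]   # inverted output values
    return np.interp(channel, xs, ys)

vals = np.array([0.0, 0.45, 0.55, 0.65, 1.0])
out = s_curve_invert(vals)
```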
