Peaks or plateaus – Light sources for DSLR scanning color negatives

Many people like to shoot film, but enjoy their images in the digital domain and the possibilities it offers for post-processing, printing, etc. This means that the film images need to be scanned, and using a DSLR to do this has become very popular in recent years. But it does bring up some questions – mainly: what's the best way to do this? And in particular, if you photograph a backlit negative, what kind of backlight would be preferable? I'm going to reflect on this for a bit on a theoretical basis.

So let me preface by saying that I don't have the ultimate answer. I have half a mind to actually set up an experiment, but there are so many of those that I'd like to do, so no promises.

Also, I'm going to limit my reflection to C41 film, and maybe ECN2 – so color negative. I expect that most of the argument will also apply to E6/slide film (and probably K14 etc.), but I've honestly never put much thought into that.

What I'll mostly try to get into is the question of how relevant a continuous light source, and filtered derivatives of such a continuous spectrum, really is for digitizing color negative films. I think this is the pertinent question, because the importance of a continuous light source is often brought up when it comes to DSLR scanning, but I've never actually seen it backed up by a theoretical argument. And the lack of that explanation makes me somewhat wary/skeptical – it just happens to be my nature.

One more caveat: I'll assume in the following that it's somehow advantageous to be able to adjust the color of the light used to digitize film with a DSLR. I actually have severe doubts about the relevance of this. I think that the output of any decent digital camera will be just fine if you photograph a negative with plain, white light (of any reasonable color temperature), provided recording is done at a high bit depth (nominally 16) and none of the color curves are truncated on either the dark or (especially) the light side. Post-processing consists of inverting and balancing the channels, and the results will come out fine provided the negative is properly exposed and processed to begin with. I've done exactly this using a scanner instead of a DSLR (I doubt the difference is all that relevant in this context) and it really works just fine.
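For the curious, here's a minimal sketch of what 'inverting and balancing the channels' can look like in code, assuming a linear capture loaded as a numpy array scaled to 0..1 (the function and the base-sampling shortcut are my own illustration, not any particular tool's method):

```python
import numpy as np

def invert_negative(raw, base=None):
    """Invert a linear capture of a color negative and balance the channels.

    raw  : float array of shape (H, W, 3), linear sensor data scaled to 0..1
    base : optional (r, g, b) sample of unexposed film base (the orange mask);
           if omitted, the per-channel maximum of the frame is used instead.
    """
    raw = np.asarray(raw, dtype=np.float64)
    if base is None:
        # The brightest value per channel approximates the film-base color.
        base = raw.reshape(-1, 3).max(axis=0)
    balanced = raw / np.asarray(base, dtype=np.float64)  # cancels the mask
    density = -np.log10(np.clip(balanced, 1e-4, 1.0))    # per-channel density
    # Re-expose: a dense negative means a bright subject, so flip the scale.
    return 10.0 ** (density - density.max())
```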

So all considered, I'm mostly setting up an argument for the sake of argument, really. If I were to fashion a DSLR scanning setup, I would just pick any old white LED panel, film holder and camera stand and have at it. No need in my mind to complicate things any further.

There is one conceivable theoretical argument for using a filtered light source: if you color balance the light source, you can (at least theoretically) get each color layer to overlap pretty much perfectly in the histogram, and that in turn will allow you to 'expose to the right' to optimize S/N ratio. Since the color layers in a typical color negative image are quite different in terms of relative densities, this approach makes sense from a theoretical viewpoint. See this digital mockup I made:

Digital mockup of the concept of using a filtered light source so expose-to-the-right S/N optimization can be employed in DSLR scanning. Left: ECN2 color negative scanned as positive. Note how the separate R, G and B curves occupy distinct parts of the histogram; they do not neatly overlap. Middle image: using a curves adjustment, I simulated a color-adjustable light source that allows the three color curves to occupy the same area of the histogram (they overlap). Right image is the inverted version of the center image just to illustrate ‘how it would come out’ as a positive; note that the colors are fairly close to neutral – but could do with some tweaking and of course a contrast adjustment. Note that in the middle image, I also chose to occupy the full histogram; in a typical DSLR scanning situation, a color negative image would occupy just a tiny part of the entire histogram, and that area can then be shifted to the right by adjusting exposure on the DSLR.
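To make the mockup concrete, here's a small numerical stand-in for what the filtered light source does optically. In the real setup the gains would come from dimming the LED channels (or dialing in filtration), not from software; the function and the percentile shortcut are just my illustration:

```python
import numpy as np

def simulate_channel_gains(scan, target=0.9):
    """Scale each channel so its histogram's right edge lands near `target`,
    mimicking a light source whose R, G and B intensities are tuned so that
    the three curves of the negative overlap near the right of the histogram."""
    scan = np.asarray(scan, dtype=np.float64)
    # The 99.9th percentile is a robust stand-in for each histogram's right edge.
    edges = np.percentile(scan.reshape(-1, 3), 99.9, axis=0)
    gains = target / edges          # per-channel 'filtration' (or LED dimming)
    return np.clip(scan * gains, 0.0, 1.0), gains
```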

So if we assume it’s a good thing to have a filterable light source, along the lines of the argument above, what kind of approach would be preferable?

An obvious choice would be between ‘old tech’ and ‘new tech’, which for all intents and purposes is either a dichroic filtered light source scavenged from a color enlarger (old tech), or an RGB LED setup with individually controllable channels (new tech).

It seems that part of the rationale behind using a dichroic color head is an intuitive one: dichroic color heads have always been associated with printing color negative (and in the days of Ilfochrome: color positive) film. They must be a natural match, or even a match made in heaven, right? After all, we've been working with that combination for decades, so there must be something to it.

I personally think this match is more a case of opportunistic co-evolution than of the best possible match from a technical viewpoint. To expose a color negative (e.g. C41) image onto a color negative paper (e.g. RA4), you need to have some control over the relative exposure of the three different color images. There are two ways to do this color filtration: additive or subtractive. What they have in common, at least in ‘old tech’ as it was/is used in most color darkrooms or even analog minilab printers, is that they use a broad-spectrum light source (or several, as in some additive machines) and three band-pass (additive) or band-gap (subtractive) filters. 

Ok, hold on for a little theory. You're probably familiar with it, but for those who aren't, it may help – and for the rest, it can act as a refresher.

Band-pass or band-gap filters are called thus because they pass or block a fairly broad band of wavelengths. E.g. a red band-pass filter may pass anything between, let's say, 640nm and 700nm or so, a green band-pass filter may pass anything between 510nm and 600nm, and a blue one may pass from around 400nm up to 490nm or so. These are approximately the numbers of the Wratten #70, #99 and #98, respectively. Furthermore, the edges of those bands can be quite steep, but don't necessarily have to be.

Kodak Wratten optical densities. Ignore the vertical blue, green and red bands; they’re from another article. It’s the curves that matter here.

The dichroic C, M and Y band-gap filters in a typical color head have similar cutoff wavelengths, but their transmission is inverted compared to RGB filters. I.e. a magenta dichroic filter will pass anything from the lower UV-A limit (350nm or so) up to about 500nm, and anything above about 600nm, blocking the band between 500nm and 600nm. Btw, there's a very nice picture of this in a Thor Labs pdf I found. Note that in the image below, what's 'high' in the graph represents light that's being passed; a 'low' line denotes light being blocked. That's why I refer to the magenta filter as a band-gap filter: it creates a 'gap' in the light spectrum by blocking light between 500nm and 600nm.

Example filter spectra, taken from here.

Keep in mind we’re talking about light sources here, and how to manipulate the color of the light we shine through a piece of film.

The contemporary alternative to these filters, with inherently broad transmission spectra, would be lasers (chemical) or semiconductor light sources (LEDs or semicon lasers) with narrow peaks. For instance, the emission spectrum of a red LED might look something like this: 

Note the difference between this narrow peak and the broad spectrum that's passed by a red filter. Both will look pretty 'red' to our eyes (although you will see a difference if you compare them directly), but they're fundamentally different.
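To put a number on 'fundamentally different', here's a quick sketch with made-up curves – a Gaussian stand-in for the LED, a flat band for the filter, and a toy dye with a sloping absorption shoulder (none of these are measured data):

```python
import numpy as np

wl = np.arange(380.0, 701.0)   # wavelength axis in nm

# A red LED modeled as a narrow Gaussian peak (assumed: 660nm center, 20nm FWHM).
led = np.exp(-0.5 * ((wl - 660.0) / (20.0 / 2.355)) ** 2)

# A red band-pass filter over a broadband source: flat from 640 to 700nm.
bandpass = ((wl >= 640.0) & (wl <= 700.0)).astype(float)

# A toy cyan dye: deep absorption around 680nm, tailing off toward shorter waves.
dye_T = 1.0 - 0.9 * np.exp(-0.5 * ((wl - 680.0) / 30.0) ** 2)

for name, src in [("660nm LED", led), ("640-700nm band-pass", bandpass)]:
    t_eff = np.sum(src * dye_T) / np.sum(src)   # source-weighted dye transmission
    print(f"{name}: effective transmission through the dye = {t_eff:.3f}")
```

The LED reads the dye essentially at a single wavelength; the broad band averages across the whole shoulder.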

You have to realize that such narrow-peak light sources, or even filters with very narrow band-gap or band-pass behavior, have historically been difficult to make and hence were not really available to the mass market. So when color negative materials rose to popularity in the 1960s or so, the logical choice was to use not-so-perfect filters, and preferably dichroic ones, because those also allowed the use of just a single light source, without very complicated mirror optics to split out three different beams (open up an additive Philips PCS color head and compare it to any Durst, Leitz etc. dichroic head and you'll understand). For practical and economic reasons, we had to rely on pretty broad (i.e., non-selective) spectra for color filtration.

Add to this the nature of chromogenic dyes, which, as you might remember, are also not as clear-cut as they would ideally have been from a theoretical viewpoint. The ideal chromogenic dye blocks an entire color (a fairly broad spectrum) with great efficacy, and leaves all the rest of the spectrum alone. In reality, things don't quite pan out that way, of course. Maybe you're familiar with these images from a website that explains why there's an orange mask on color negative film:

Example chromogenic color negative dye transmission spectra. Taken from this excellent article.

The ideal transmission curves for these dyes would look more like the filter behavior curves from the Thor Labs pdf: they would pretty much look like buckets, with a flat bottom smack at the horizontal axis, very steep walls (dead vertical would be nice), and a stable plateau right at 100%. In reality, it looks more like a Tuscan landscape (save for the cypresses). 

Put into words, the problem is that the dyes, particularly cyan and magenta, also absorb colors in adjacent areas. So the cyan dye also absorbs some green light (and thus plays a role in forming the magenta layer in an RA4 color print), and the magenta dye also creates some image information in the blue area (contributing to yellow image formation).

This problem is to an extent overcome in color negatives by the orange mask. This mask is not just a constant orange (or in reality, brown) image; part of it is actually image-dependent. The cyan and magenta images on a color negative film consist in part of a magenta and a yellow inverted dye image, respectively, that's formed alongside the main color. Actually – the color is unformed rather than formed, as it's the dye coupler itself that has the required color.

Take for instance the cyan dye image: if you can come up with a material that's magenta on its own, but is converted to cyan, then you can create an image that is cyan, while its inverse at the same time remains in magenta. This will then compensate quite nicely for the imperfect transmission of the cyan dye itself. The correction won't be perfect – but it will be (lots) better than nothing.
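A few lines of toy arithmetic (the numbers are made up, just to show the mechanism) illustrate why this works: the unwanted green density of the formed cyan dye and the green density of the leftover magenta-colored coupler add up to a constant, so the error stops being image-dependent:

```python
# Assumed toy densities: per unit of coupler, the formed cyan dye carries 0.3
# of unwanted green density, and the unreacted (magenta-colored) coupler also
# has a green density of 0.3. Masking works when these two values match.
DYE_GREEN_ERROR = 0.3     # unwanted green density of the formed cyan dye
COUPLER_GREEN   = 0.3     # green density of the leftover magenta coupler

for formed in (0.0, 0.25, 0.5, 0.75, 1.0):   # fraction of coupler turned to dye
    green = formed * DYE_GREEN_ERROR + (1.0 - formed) * COUPLER_GREEN
    print(f"cyan dye formed: {formed:4.2f} -> total green density: {green:.2f}")
# Output: 0.30 everywhere. The image-dependent green error has been turned
# into a uniform offset -- which is exactly what the orange mask is.
```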

With the yellow dye, the problem is apparently unsolvable, because no color coupler exists that is magenta by itself and turns into yellow when coupled with oxidized color developer during processing.

The dye image corrections above are crucial if you make RA4 color prints (for instance) using a dichroic enlarger. As we established earlier, the light source in such an enlarger isn't filtered to narrow peaks, but instead to fairly broad bands of adjacent wavelengths. As a result, if the dyes in the color negative have absorption in neighboring colors (e.g. the cyan dye blocks some green light), this is going to affect the colors as they appear on the printing paper. And as I argued above, the corrections formed by the orange mask are remarkably good – but never perfect.

With scanning, the same thing happens as when printing on color paper. In fact, I suspect the problem is actually inherently much worse, but that it is compensated to a massive extent by digital processing within the camera itself. Let me explain by showing first the color sensitivity of RA4 paper:

FUJIFILM Crystal Archive paper spectral sensitivity

Note how the sensitivity of the paper falls fairly neatly into distinct buckets. Especially red (forming the cyan image on the paper) is pretty ‘pure’. Blue and green (forming the yellow and magenta dye images in the print) are more problematic, with rather large parts of the blue-cyan color spectrum activating both layers at the same time.

I expect, but have never looked into it very deeply, that some of this cross-sensitivity in the blue/green spectrum might be worked around by the dye transmission of contemporary C41 color films and their color correction masks (see the orange mask story above). But that's kind of wild guessing on my part, and a bit of a sidenote.

Looking at those RA4 paper sensitivity plots, what does stand out is that the peaks are pretty broad indeed. So any crossover of color filtering from one image color to another will be picked up by the paper. I.e., the paper is inherently quite sensitive to crossovers induced by unwanted negative film dye transmission outside of the intended spectrum.

Now, why do I believe the problem might actually be even worse (in theory) for digital cameras? Have a look at this plot, which shows the sensitivity of a Canon CMOS sensor as found in the EOS 600D / Rebel T3i. Yes, it might be long in the tooth by now, but I don't think there have been very fundamental changes in these kinds of plots, given the underlying physics. Besides, if you look at similar plots for several camera types, there are differences, but the overall pattern is quite similar.

Canon EOS 600D / Rebel T3i spectral sensitivity. This is with a regular Bayer RGB filter array fitted over the sensor. [Source]

Remember how RA4 paper is cross-sensitive in the sense that e.g. the blue-sensitive layer will activate to some green light? Well, looking at how a common CMOS sensor performs, the problem appears to be much worse. The red sensor sites will activate to some green and even blue light, the green sensor sites will pick up just about anything in the visible spectrum except blue that borders on UV, and the blue sensor sites are in this example not very selective at all.

So why don't real-world DSLR images come out as one murky brown mess? That's what you'd expect from the plot above. The only explanation can lie in complex mathematical filtering of the signal, based on calibration data collected by the sensor and/or camera manufacturer. By exposing the sensor to various wavelengths and measuring the R, G and B signals, compensation curves (algorithms) can be derived, and those can then be applied by the camera firmware (or RAW converter, perhaps) to obtain a clean(er) R, G and B image from a heavily contaminated, well, mess.
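In its simplest form, that 'complex mathematical filtering' is a 3x3 color matrix applied to the raw data. Real cameras derive the coefficients from calibration measurements; the numbers below are purely illustrative:

```python
import numpy as np

# Hypothetical correction matrix. The diagonal boosts each channel; the negative
# off-diagonal terms subtract the crosstalk the Bayer filters let through.
# Each row sums to 1 so that neutral (white) input stays neutral.
M = np.array([
    [ 1.8, -0.5, -0.3],   # corrected R from raw R, G, B
    [-0.4,  1.7, -0.3],   # corrected G
    [-0.1, -0.6,  1.7],   # corrected B
])

def decontaminate(raw_rgb):
    """Apply the correction matrix to linear raw data of shape (H, W, 3)."""
    return np.einsum('ij,hwj->hwi', M, raw_rgb)
```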

Assuming that this digital correction is perfect, we shouldn’t have to worry too much about it. For all intents and purposes, in a practical setting, I’d happily rely on the aptitude of the Canons and Nikons of this world to get it (pretty much) right. But the physical phenomenon is there, and I can’t shake the feeling that if you can circumvent the problem to begin with, it might actually help. A little.

By now, we've crossed over (no pun intended) into 'new tech' territory. We've explored old-style dichroic filtered light sources and encountered the inherent problems of color negative dyes. Could we then use some more 'new tech' to inherently do it better? Well, I suspect we could. And again, whether it makes practical sense – I really don't know. But theoretically at least, it seems plausible.

Here's what I'm thinking about: the dye image in a color negative has some unwanted absorption, which is partly, but likely not entirely, fixed by the embedded/inherent correction mask in the color negative. But the extent of the unwanted absorption problem depends a bit on the wavelength: it's more problematic in some areas than in others. Given that semiconductor light sources (LEDs, semiconductor lasers) are fairly narrow-band, couldn't we try to put this into practice? It seems to me that we could try something like this:

Here, I've taken those spectral transmission plots of the imperfect color negative dyes and put them on top of each other. Moreover, I've drawn in some vertical colored lines in places that I think make sense. What I'm doing here is looking for a place where the color we want to record gives a good signal, while the undesired transmission of the other colors is as low as possible.

For the blue color (the yellow dye in the color negative image), this aligns with a wavelength of about 420nm. Here, the magenta dye shows its highest transmission (so no unwanted filtering) and the cyan dye also performs reasonably well. Looking at the cyan dye only, it would be better to pick a much longer blue wavelength, close to 480nm, but this would come at the cost of greatly increased unwanted magenta/green absorption.

For green (the magenta dye), there seems to be an optimum around 540nm (give or take), where the actual green filtration is strongest and the combined, unwanted yellow/blue and red/cyan opacity is lowest – although it's an inherently compromised situation. Still, we can select a bit of an optimum, which is nice.

For red, it's a little easier, as anything beyond about 650nm will be fairly pure, with little to no unwanted absorption from the dyes other than cyan.
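The eyeballing above can also be written down as a tiny search: for each channel, pick the wavelength where the wanted dye's density is high and the other two dyes' combined density is low. A sketch, assuming the dye curves have been digitized into arrays (cyan_d, magenta_d and yellow_d are placeholders):

```python
import numpy as np

def pick_wavelength(wl, wanted, others, search_range):
    """Return the wavelength within `search_range` that maximizes the wanted
    dye's density minus the summed density of the other two dyes.

    wl     : array of wavelengths in nm
    wanted : density curve of the dye this channel should read
    others : list of the density curves of the other two dyes
    """
    lo, hi = search_range
    in_range = (wl >= lo) & (wl <= hi)
    score = wanted - sum(others)            # reward signal, punish crosstalk
    return wl[np.argmax(np.where(in_range, score, -np.inf))]

# Usage, assuming cyan_d, magenta_d and yellow_d were digitized from the plots:
# red_nm   = pick_wavelength(wl, cyan_d,    [magenta_d, yellow_d],  (620, 700))
# green_nm = pick_wavelength(wl, magenta_d, [cyan_d,    yellow_d],  (500, 580))
# blue_nm  = pick_wavelength(wl, yellow_d,  [cyan_d,    magenta_d], (400, 490))
```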

Now, the above optimal wavelengths are based on some pretty old plots, for dyes that are most likely no longer used in color negative films. The plots look to me like those of maybe 1960s or 1970s dye sets, which have been re-engineered a couple of times since. The basic principles will be the same, but the exercise would have to be repeated for a more modern dye set. I have not scoured the web (yet) for more contemporary color negative dye transmission plots. I'm sure they are out there, somewhere. It might be interesting to repeat the exercise with such data.

If you plot the wavelengths I arrived at – 420nm blue, 540nm green and 650+nm red – on the sensitivity plot of the CMOS sensor I displayed above, you can see that from that perspective, too, it's actually a pretty nice compromise. At these wavelengths, the cross-sensitivities are fairly low (relatively speaking; in an absolute sense, they are still pretty significant).

But frankly, you'd have to consider the specific sensor in the DSLR that's to be used for the scanning job. Just like with the dye transmission curves, I just took what I could easily find to show the principle. To do a proper job, you would have to figure out a good compromise between the optimum peaks for the color negative dyes and those for the CMOS/CCD sensor. One could then build a light source using LEDs of appropriate wavelengths and make each color channel dimmable, as sketched below (PWM would probably work just fine, given long enough exposure times).
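As a sketch of what such a dimmable drive could look like – assuming, purely for illustration, a Raspberry Pi Pico (RP2040) running MicroPython, with the three LED channels driven from arbitrary pins through suitable drivers (pins and levels are placeholders):

```python
# MicroPython sketch for an RP2040 board; pin numbers are placeholders.
from machine import Pin, PWM

channels = {
    "red":   PWM(Pin(13)),
    "green": PWM(Pin(14)),
    "blue":  PWM(Pin(15)),
}

for pwm in channels.values():
    # A few kHz is far faster than any sensible scanning exposure (1/60s or
    # longer), so the PWM flicker averages out completely in the capture.
    pwm.freq(4000)

def set_balance(r, g, b):
    """Set relative channel intensities, each in the range 0.0..1.0."""
    for name, level in zip(("red", "green", "blue"), (r, g, b)):
        channels[name].duty_u16(int(level * 65535))

set_balance(0.35, 0.80, 1.00)   # example: tame red to counter the orange mask
```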

If you combine this approach – tailoring the wavelengths of an RGB light source to the dye sets of today's C41 films (and perhaps ECN2, if you fancy) and to the spectral performance of the camera used – I expect that slightly better color performance is possible than with a broad-spectrum light source and dichroic filtration.

If you furthermore add an ‘expose to the right’ regime to this, further signal/noise ratio optimization is possible, reducing unwanted chroma noise and further improving color purity.

Well, that's my paper-napkin take on the issue. I'm aware that there are many points in the argumentation where justified criticism is possible. As they say, the devil is in the detail, and it's very likely that I've overlooked one or two pretty crucial 'details' that may spoil the broth. If that's the case, then at least I hope this post will trigger others to correct my mistakes and come up with a better answer. That would be progress as well. In any case, it would get us past the point where people state that some kind of light source is preferable mostly because they intuitively feel that this is the case. Intuition is a great inspiration, and if all else fails, we can even use it in decision-making. But a more scientific approach is at least worthy of consideration.

Addendum 11 Dec 2024: I just came across a blog post by Alexi Maschas, who reached a similar conclusion, though apparently starting from the practical observation that he found his RGB-illuminated DSLR scans better than his scans illuminated with broad-spectrum white light. He also had a look at the illuminators of Fuji and Nikon scanners and determined that they use narrow-band LED light of around 450nm blue, 540nm green and 650nm red. Funnily enough, this is what I ended up using for my RGB enlarger light source as well, except that 'my' green is a little lower, at 525nm.

Additionally, Maschas refers to a very interesting 2018 report by Flueckiger et al. that goes into scanning solutions for old color film stock. On page 14 they suggest wavelengths of 460nm blue, 525nm green and 680nm red, relying on essentially the same kind of analysis that I did in quick & dirty fashion in this blog – but more thorough.

25 thoughts on “Peaks or plateaus – Light sources for DSLR scanning color negatives”

  1. Your conclusions are totally right; let's hope more people will read and understand them. In the professional realm, narrow-band RGB scanning is a common thing and nobody disputes its benefits. And when dealing with heavily faded archival material, only narrow-band lights will help rescue the remaining color information… It's just priceless.
    According to common sense, broadband light should give the best colours, but one must understand that in photography the color tones are stripped down into only three channels, and on their way to the viewer they must be kept as separate as possible, without interfering with each other; otherwise you're going to lose color information. If there's crosstalk between them, the saturation of the color tones is lowered in various ways.

    The scanning light for each layer must radiate at the wavelength where that layer's dye has its highest absorption. The image sensor should be broadband-sensitive, without any filters. The desired spectral area to be scanned is then chosen by the light source, not by the image sensor itself.

    But with DSLR scanning it's more complicated. There's a problem I call Bayer Hell. The spectral response of the filters in the Bayer mask is chosen to be similar to the sensitivity of the cones in the eye's retina – they overlap (they must overlap, otherwise color perception would not work). This overlapping introduces yet another color crosstalk along the way.

    I have experimented with RGB LED DSLR scanning for about 2 years, and due to BAYER HELL I have found no ultimate choice of LED wavelengths for the task of DSLR scanning. There are only poor compromises.
    Problem-free is the RED LED. You can choose 660nm or 630nm with excellent results. The CYAN dye has excellent absorption for RED and only very low absorption for other colors. The GREEN and BLUE filters in the Bayer mask have very low pass-through for RED LED light. The red rendition in these scans always amazes me. I mostly scan old color slides, where it works best. When I scan negatives, I have noticed that in areas with intense red (rendered as cyan in the negative) the RED channel can go very dark – that's how strong the red separation is. To avoid clipping in red when inverting the negative, you must expose the red channel very generously. Red stands out in the resulting scan – it must be tamed by lowering its saturation (by mixing a little of the green and blue channels into the red channel).

    For GREEN, a 550nm LED would be ideal. The usual 525nm LED leaks quite strongly into the BLUE-sensitive Bayer elements. That worsens the separation of the green, which is poor anyway due to stray absorption of the magenta dye in the blue area. I struggle with this especially in old ORWO color slides. Masked negatives should come out better, but anyway, an evil compromise creeps in here haha.

    You would be right with a 450nm LED as BLUE, but BAYER HELL strikes here again. In your DSLR-sensor-sensitivity spectral graph it is not visible, but in reality, RED-sensitive Bayer cells also start to react to light from ca. 450nm downward. For a vision-like response this is OK, because it enables the camera to "see" violet tones – it's the same as with human vision, where the red-sensitive cone cells in the retina react to violet light along with the blue-sensitive cells, creating a mixture of blue and a little red. So with a 450nm LED I got some leak-through into the RED channel, which means your blue will be contaminated by red.

    The ultimate solution would be to get a proper 550nm green LED and a debayered camera sensor, take the photograph three times – in red, green and blue light – and then merge the shots in a photo editor.
    No Bayer, no problem – better dynamic range, less noise, more sharpness, higher resolution, no demosaicing gimmicks.

    1. > In your DSLR-sensor-sensitivity spectral graph it is not visible, but in reality, RED-sensitive Bayer cells also start to react to light from ca. 450nm downward.
      Yeah, that's right. There are quite a few more spectral response charts of dSLR sensors (virtually all Bayer topology) on maxmax.com, and they show the issue of blue/UV crosstalk on the R and G channels as well – plus additional crosstalk elsewhere. I guess the only thing that might work is to tailor the system of film (dyes), dSLR and LEDs/light source into a well-tuned combination. Even then, sensor crosstalk may be an issue.

      On a positive note, I've seen some recent comparisons of decent/high-end scanners and affordable dSLRs for scanning color film stock, and I personally found the results quite acceptable. Were they perfect? Probably not, but I doubt I'd lose much sleep over it.

      PS: I appreciate your thoughtful and detailed responses; it’s a pleasure!

      1. Thanks for your appreciation, I enjoy your articles, too.

        Even though I spotted the issues with RGB LED vs. DSLR that I mentioned, I won't go back (from RGB LED). Just the ability to control the RGB lights separately is priceless. Compensating for the negative's mask or whatever color deviation is very easy.
        An RGB LED backlight gives much more saturated colors than continuous light. Even with the naked eye you can see the colors of the slide much more vibrant under RGB light. Often I must even lower the saturation in post-processing. This has an advantage – decreasing saturation decreases color noise. RGB LED is just the way to go.

        By now I have settled on RGB-in-one chips. They may not have ideal wavelengths, but my DIY device suffered from uniformity problems. When I used separate chips (I tried 450nm, 530nm, 660nm), the illumination was uneven, and especially with medium format slides there was a distracting color shift across different areas of the slide. With RGB-in-one chips the color uniformity is much better.

        Color channel crosstalk can be solved digitally – quite commonly I use the channel mixer to subtract some green from blue (B minus G). This improves the rendition of the greens and blues. If there's some leakage from blue into red, it can be compensated by subtracting B from R (R minus B). G and/or B should be subtracted from R by some amount. The amounts of these compensations must be tested. The R/G/B curves are going to need some tweaking, too.
        RAW editors can do miracles. It's possible to make a preset with all these compensations. Also, the parameters of DSLRs are constantly improving in terms of noise and dynamic range, which means more headroom for the necessary post-processing.

        Btw, my old film scanner (Nikon LS-2000) also has, I think, some 470/520/630nm LEDs, which are not ideal for color films. In some cases I must subtract some green from blue to achieve a natural blue/green rendition.

        1. > By now I have settled on RGB-in-one chips. They may not have ideal wavelengths, but my DIY device suffered from uniformity problems.
          Yes, I understand; that's a difficult one to tackle. I generally handle this by using small SMD LEDs that are closely spaced, plus a diffusor to further even out the light. But it's really a matter of experimentation. Integrated RGB devices are certainly a little easier in this regard.

          > Color channel crosstalk can be solved digitally
          In fact, you could do a triple exposure, exposing the dSLR image three times for R, G and B separately, and then re-assemble the images digitally. I've not given this much thought, but it might be possible to further eliminate color sensor crosstalk this way, since you don't have to rely on the in-camera processing for this purpose. You can take the B exposure and map it entirely onto the digital B channel, chopping off any (by definition unwanted) crossover into the R and G channels. Have you tried this approach? It's a little more work, of course, but it seems theoretically interesting.
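          In numpy terms, the re-assembly would be as simple as something like this (a sketch, assuming the three captures have already been demosaiced into arrays):

          ```python
          import numpy as np

          def merge_triple_exposure(shot_r, shot_g, shot_b):
              """Combine three captures taken under red, green and blue light
              into one RGB image. Each input is an (H, W, 3) linear array;
              only the channel matching the illuminant is kept, so crosstalk
              that leaked into the other two channels is simply thrown away."""
              return np.stack([shot_r[..., 0], shot_g[..., 1], shot_b[..., 2]],
                              axis=-1)
          ```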

          1. Yes, SMD chips are a nice idea, but I haven't tried them yet. One has to prepare a PCB for them and there's a lot of soldering involved, so I am hesitant about them. But I think I've seen real 550nm chips on the SMD market, which is quite appealing, I must say. And my RGB box needs a serious upgrade anyway (thermal issues, flimsy brightness control), so in the next version I will probably go SMD.

            I considered triple exposure, but haven't tried it yet. For practical usage there should be some circuit which would automatically take three sequential shots while cycling through the RGB colors.
            And another problem is that I am no big friend of all those Lightrooms and Photoshops. They are too clunky and hard to understand for me. My favourite simple photo editor doesn't allow taking channels from three different sources.

            Moreover, I struggle with a more serious problem, and that is noise. My old Canon 500 DSLR is just noisy, especially in the R channel. So my thoughts now go in the direction of "oversampling" the image, i.e. taking several exposures and merging (averaging) the images together. I would then get lower noise, higher dynamic range and higher bit depth. With such a "superimage" it's easier to make digital enhancements without quality loss. Along with the noise, the R channel is also quite unsharp. I guess this is a problem of the antialiasing filter on the sensor, which affects mostly the red side of the spectrum(?)…

          2. > One has to prepare a PCB for them and there's a lot of soldering involved
            Yes. I always make aluminum-core PCBs for SMD LEDs and solder the components using a hot plate. It takes a (small) investment in equipment to be able to do this, but it's really the only way. Well, a soldering oven would work too, but it's overkill. You can get the aluminum-core PCBs made at e.g. pcbway or jlcpcb if you don't want to DIY them.

            > For practical usage there should be some circuit which would automatically take three sequential shots while cycling through the RGB colors.
            I considered rigging up something with a microcontroller (e.g. an Arduino board) and a hacked cable release. It shouldn't be too difficult to set up, but it takes some time. I considered combining this with a small servo setup for automatically photographing/scanning stacks of 35mm slides. You guessed it – haven't gotten around to it yet!

            Thanks for posting the RGB vs. white light comparison; it's a really dramatic difference. But given the curve adjustments, and the fact that you're always working from a 'raw' capture toward something presentable, I also find it hard to interpret the difference and figure out which option is superior. I'd have to play around with it to see, and I just haven't gotten around to it. Mostly for lack of priority, I must say, because I don't scan that much, and when I do, I just use one of my old scanners. As long as they get the job done, the DSLR-scanning rig is kind of low on my to-do list.

          3. Thanks for the advice on the PCB.
            If you can program MCUs, you are at an advantage. I can't. But this 3-exposure thing could be done with just a 4017 counter, a 555 astable and a couple of transistors and resistors.
            If you want to see the original RAWs, I can e-mail them to you.

          4. Thanks, I've had a look at the RAW files you sent me separately. They're dramatically different, for sure! It also takes a lot of curve manipulation to turn them into normal color images – which is inherent to digitizing color negative film stock, so nothing odd about that. I am surprised at how totally different the RGB vs. white LED captures turn out. I'd really have to dive in to figure out what works best, but one thing is certain: it makes sense to look into the matter. Although for me, it'll have to wait until the issue presents itself with more priority.

          5. In the digital realm one can tweak the curves, color mapping, saturation… to perfection – no matter how bad the negative looks.
            But basically, the color curves of a properly developed negative should converge, so the need to tweak them separately should be minimal. But of course, once we have our negative in Photoshop, why not use its magic. People flexing their color negative scans on social media surely do some Photoshop tweaking that's not possible in the darkroom…
            So eventually I tried to simulate darkroom printing with a raw picture scanned in white LED light (which is close to my enlarger with its bulb lamp and set of gelatine filters). That means I didn't touch the color curves; I only pre-adjusted the overall contrast to be similar to an RA-4 paper print and adjusted only the RGB balance and brightness.
            This is what I would probably get in the darkroom: https://imgur.com/a/tlE0BhA

          6. > But basically, the color curves of a properly developed negative should converge
            I interpret 'to converge' as something like 'remain parallel' or perhaps 'overlap' – converging curves would suggest crossover, to my mind. Anyway, there's no fixed relationship between color negative and digital images. The only thing that approaches this would be a calibration curve that takes a calibrated input (e.g. a color checker card photographed and processed under tightly controlled conditions) and matches the digital output back to the original. Some idiosyncrasies will remain in the final result; these are the cumulative effect of imperfections in the film, equipment, digital processing, etc. It's never going to be a 100% match.

            To be frank, I’ve given up the whole concept of absolutely correct color rendition in photography. It’s a figment, really. And that leaves me, personally, with the comfortable conclusion that as long as it looks pretty to me, it’s good enough!

            PS: I can't really comment on your RA4 simulation – I'd have to actually print the negative to see how it would turn out. I suspect contrast will be (much) higher in an actual RA4 print. In one respect, your simulation is eerily (if probably unintentionally) accurate – it shows a distinct crossover with magenta highlights. Well, it's nowhere near as pronounced as this in the RA4 prints I make, but the principle does hold!

          7. Yes, I meant the curves should be parallel, not convergent. But yes, in reality they do more or less converge and/or diverge : )
            With my RA-4 "simulation" you are right about the contrast – it is way too low (I just didn't want to be too optimistic), but color-wise, I think, it will come out similar in an actual print. I have done similar simulations before and they matched fairly well. Like I said, I didn't touch the color curves, so I was not able to compensate for crossovers.
            That crossover is likely going to be there, but I can later decide to hide it in the shadows (make the highlights correct, so the color shift appears in the shadows) rather than let it stay in the highlights.
            I didn't mention that the negative was pushed by at least 1 stop and the temperature was not exactly 41°C, but around 35°C. So some crossover may actually be there.

      2. * Oh, I made a mistake in "(G and/or B should be subtracted from R by some amount)". G and B should be added (+) to R, to decrease the saturation of the reds.

    2. Here's a comparison – a Kodak Vision3 negative, ECN-2 developed, scanned with a DSLR into RAW, with a white LED backlight and with an RGB (470/530/630nm) backlight. Saturation not adjusted, no channel mixing…, only the RGB curves tweaked. You can see that in the RGB scan the reds go to an extreme. The white LED scan looks more true to reality, but in fact the RGB scan can be desaturated to a normal look too, which has the benefit of color noise reduction…
      https://imgur.com/a/JTbL9Iq

  2. If you're interested, here are the spectral dye density curves of Kodak Vision (other negative film datasheets show only the overall mid-scale neutral grey spectrogram):
    https://imgur.com/a/kFpMWfp
    IMO it looks too good to be true, but this is probably because it's already the masked result, not the dye absorptance alone. That's logical, because in a spectral measurement it's not possible to isolate the dyes from the mask dyes. At the toe of the magenta dye there is a nice undershoot below zero, indicating subtraction by the mask dye at that point.
    Also interesting is that "minimum density" curve – there are several peaks there. Do you think the peaks belong to the mask dyes? If there were 5 masks, that would be really impressive.
    So we don't know either what the real absorptance curves of modern-day dyes look like. But if they were ideal, there would be no need to mask them.
    I also checked the datasheet of Kodak Aerocolor IV, which is the only unmasked negative film produced by Kodak. But it also only shows a neutral grey spectrogram, not isolated dyes. If you have a spectrograph, though, you can measure this film yourself.
    Anyway, I'm gonna try this film out – it's sold as Santacolor by a third party who bought a batch of this film from Kodak and spooled it into cartridges. I am especially interested in how it's gonna work in an optical RA-4 print.

    1. Thanks for posting that graph; in case the link becomes unavailable, it can also be found in any Kodak Vision3 datasheet.

      The undershoot of the magenta dye is a bit odd; it would suggest a negative density – i.e. more light exits the film than enters it. Fluorescence comes to mind – or simply (more likely) a measurement anomaly.

      The peaks in the minimum density plot I can't explain; they're obviously part of the mask in some way, but whether they're intentional or not, I couldn't say. They look like the interaction between the different dye colors to me, given that they align with the points where the different dye curves cross each other. I don't doubt at all that modern dyes are imperfect, just like the ones that formed the basis of the references I gave in my blog. I don't think there are ideal dyes that work in this application, nor that sufficient R&D budget has been allocated over the past 20 years, or will be allocated in the future, to finding such ideal dyes.

      Be sure to post back how the Aerocolor film works for you. It’s a fascinating product for sure!

      1. I think the undershoot of the magenta in the graph just means that its "stray absorptance" is slightly overcompensated by the mask in that area, i.e. the mask there is slightly denser than it ideally should be, so after normalisation the curve dips slightly below zero. But maybe this only happens in the case of a blank negative; at higher densities the mask may perform better (or worse), because the masking image can differ slightly in gamma from its counterpart…

        All I know is that the films (used to) have two masks – a yellow one for the blue side of the magenta layer, and a red(dish) one for the blue + green side of the cyan layer. But this is some 40-year-old news, so maybe they have added more masks in the meantime.
        I don't know where to find more up-to-date resources on this topic…

        1. Me neither; I suspect that the whole masking situation hasn't changed all that much. But the little I know about it is all based on C41. I don't know if different masking approaches were ever developed for ECN2 films, for instance. I suspect they borrowed as much technology as possible from C41, but I'm really not sure.

          1. I happened to find some more detailed information about the masks here:
            https://www.kodak.com/content/products-brochures/Film/Processing-KODAK-Motion-Picture-Films-Module-7.pdf — on page 7-3, Film Structure.
            It says there are two masks – yellow and reddish ("pink"), as I wrote before.

            The yellow mask is created by a pre-colored (pre-yellowed) magenta coupler. Wherever magenta dye is created during development, the yellow colour is "consumed". Where it isn't, the yellow stays. Very clever. However, only part of the magenta coupler is pre-colored, because the gamma of the mask must match the gamma of the area being masked.
            Same with cyan, but it has a red pre-colored coupler. Red is chosen there for simplicity, because the unwanted blue and green absorptions are roughly the same. Again, only part of the cyan coupler is pre-colored.

            I managed to find exactly the same information in two older books on color photography theory that I have – one from 1978 and one from 1953. Both mention the same principle for creating the embedded mask; both mention the yellow and the reddish mask. No more, no less.
            The book from 1953 says that "state-of-the-art" masked negative films were already available back then. Masking is process-independent; it can be employed in an existing color process without the need to change it.
            It was written there that, due to the many duplicating steps in cinematography, it was not possible to achieve usable colour without masking beyond the 3rd duplicating step. In cinematography, masking was simply a necessity (unless another color system was chosen – like Technicolor).

          2. Great detective work, and very useful indeed! I take away from this that masking is conceptually identical across C41 and ECN2 films, and that it initially served in particular to allow acceptable color reproduction in imaging systems with several duplication steps. This also answers the question (at least in part) of why the situation is so different for positive films (slide/E6 etc.) – in addition to a visible mask of course being undesirable in that case to begin with.

          3. The conclusion is that the masking of still-image negatives was borrowed from cine technology.
            The first tripack chromogenic films were reversal. But the film industry was a major motivation in the research and development of the negative-positive process in color, because they wanted to duplicate, edit and print movies like they were used to with BW films. Negative-positive was crucial for the film industry.

            A slide by itself is OK – the errors are still too small to be annoying (though Kodachrome showed that slides could be even better, of course) – but in the negative-positive print process, the errors of both the film and the paper dyes add up. Prints from this era were not appealing – they were expensive and had somewhat bleak colours.

            The shooting masses still had to wait for their sweet colourful paper pics (masked negative print films, fast minilabs, cheap prints on better photographic papers and the big business around it) – they were probably in the back seat as far as film manufacturers were concerned.

            Professional photography didn't have a demand for masked films; they were satisfied with their slides (in the printing process, slides are duplicated into component BW copies using narrow-band filters, so the duplicating didn't add too many color problems).

          4. Who knows; I never looked into the chronology or the causality. The notion that the cine industry was a driving force behind color film development certainly isn't far-fetched. I do think that, given the way still and cine film production and R&D were entangled in the major firms involved, it's a bit of a chicken-and-egg story. It's probably hard to figure out which application was leading, and maybe it is more accurate to think in terms of a corporate R&D effort with different applications spinning out of it at different moments in time. This view of innovation, as a corporate-R&D-led effort, is fairly accurate for the period and industry we're talking about.

          5. Maybe it's not fair of me to say that occasional/amateur shooters were in the back seat. When I imagined the kilometres of film stock involved in the production and distribution of any mid-scale feature movie, I asked myself: who's the boss here – some holiday shooter, or the studios? But perhaps I was wrong, since there are millions of holiday shooters, which makes for a huge market.
            But I am certainly not the inventor of the story of the movie industry motivating the color negative process. This is what I learned from the 1953 book mentioned earlier…

  3. Recently a friend and I were comparing DSLR scans of a slide using a white LED and RGB LEDs. The results differed not only in saturation, but also in the rendering of color hues.
    As slides are designed and balanced to be observed directly in continuous white light, I had to take the scan under white light as decisive…
    A DSLR sensor has a certain built-in spectral response, and the raw image has to be converted into an appropriate colorspace. This default conversion is done either in camera (when shooting JPEG) or, when we shoot RAW, automatically upon opening the RAW file in a RAW editor.
    For every camera there is a model-specific formula for calculating the RGB channels in the desired colorspace from the RAW data. This calculation is designed so that the image displayed on a specific display device (roughly) matches the scene as if it were observed directly by eye.
    This default conversion formula, however, assumes continuous light sources, so colors captured under RGB lights will come out distorted with the default conversion.
    So some recalculation must be done to adapt the conversion to non-continuous light sources. It would be nice if RAW editors had presets for that.
    This recalculation is basically done by adding and/or subtracting the channels from each other. It can be done in every photo editor (the function known as the "channel mixer"); however, one must know the exact values, which would probably have to be determined using some test slide with a color chart.
    With DSLR RGB-LED scanning of color negatives it is much more complicated, because one must take into account not only the spectral response of the image sensor, but also the spectral response of the film dyes and of the film itself. All of these affect the resulting digital image. The best method would be to photograph some color chart onto the negative and then use the channel mixer in the editor to modify the image so that it matches the photographed color chart – and save this channel mixer configuration as a preset to apply to further negative scans.

    1. I agree that scanning slides may be a different ballgame from scanning negatives. In my post I was mostly thinking of color negatives, and what you say about slides, and how dSLRs can struggle with discontinuous light sources, is certainly valid.

      Your idea of doing a kind of calibration using a color checker can certainly work; it should get you close, as long as a couple of factors are controlled for – i.e. film used, lighting conditions, consistent development, etc.
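      In numpy terms, such a calibration would boil down to a least-squares fit of a channel-mixer matrix over the patch values – a sketch, assuming the patch RGBs have already been extracted from the scan:

      ```python
      import numpy as np

      def fit_channel_mixer(scanned, reference):
          """Fit a 3x3 channel-mixer matrix M minimizing
          ||scanned @ M - reference||.

          scanned   : (N, 3) RGB values of the color-checker patches in the scan
          reference : (N, 3) known RGB values of the same patches
          """
          M, *_ = np.linalg.lstsq(scanned, reference, rcond=None)
          return M   # apply later as: corrected = pixels.reshape(-1, 3) @ M
      ```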

      1. Yes, I also think this recalculation would be film-specific. After all, scanning software often comes with presets for major film brands/types.
        I only mentioned the problems of color translation, of course. I didn't mention the gamma curve we must find and apply. The situation here is similar to the colorspace transformation – the RAW editor applies an "eye" curve to the linear RAW input.
        But a negative has a flattened curve, so we must apply a custom curve to it.
        I got the idea that instead of matching an actual color checker against the photographed and DSLR-scanned one displayed on the screen, we could use a different method of calibration:
        We can photograph a color checker on our favourite color negative film, develop the film with our favourite process, make an optical RA-4 print from it, then scan the negative using our favourite RGB LEDs and our favourite DSLR, and then match the scan on the screen to the RA-4 print for color and gamma. We get a candid replication of a real photograph. Color negative film is just an intermediate, built to be transferred and interpreted through color photographic paper. If somebody just scans a negative and then arbitrarily manipulates it in Photoshop, he can get maybe impressive results, but it wouldn't be "real" anymore.
