Many people like to shoot film, but enjoy their images in the digital domain and the possibilities it offers for post-processing, printing, etc. This means that the film images need to be scanned, and using a DSLR to do this has become very popular in recent years. But it does bring up some questions – mainly: what’s the best way to do this? And in particular, if you photograph a backlit negative, what kind of backlight would be preferable? I’m going to reflect on this for a bit on a theoretical basis.
So let me preface by saying that I don’t have the ultimate answer. I have half a mind to actually set up an experiment, but there are so many of those that I’d like to do, so no promises.
Also, I’m going to limit my reflection to C41 film, and maybe ECN2 – so color negative. I expect that most of the argument will also apply to E6/slide film (and probably K14 etc.), but I’ve honestly never put much thought into that.
What I’ll mostly go into is the question of how relevant a continuous light source (and filtered derivatives of such a continuous spectrum) is for digitizing color negative films. I think this is the pertinent question, because the importance of a continuous light source is often brought up when it comes to DSLR scanning, but I’ve never actually seen it backed up by a theoretical argument. And the lack of that explanation makes me somewhat wary/skeptical – it just happens to be my nature.
One more caveat: I’ll assume in the following that it’s somehow advantageous to be able to adjust the color of the light used to digitize film with a DSLR. I actually have severe doubts about the relevance of this. I think that the output of any decent digital camera will be just fine if you photograph a negative with plain, white light (of any reasonable color temperature), provided recording is done with a high bit depth (nominally 16) and none of the color curves are being truncated either on the dark or (especially) the light side. Post processing consists of inverting and balancing the channels, and the results will come out fine provided the negative is properly exposed and processed to begin with. I’ve done exactly this using a scanner instead of a DSLR (I doubt the difference is all that relevant in this context) and it really works just fine.
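To illustrate how simple that post-processing is in principle, here’s a minimal NumPy sketch of inverting and balancing a linear-light capture of a negative. The function name and the white-point heuristic are my own, purely for illustration, not anyone’s established workflow:

```python
import numpy as np

def invert_negative(img, white_point=None):
    """Invert a linear-light RGB capture of a color negative.

    img: float array in [0, 1], shape (H, W, 3), assumed linear.
    white_point: per-channel sample of the film base (orange mask);
    if None, the per-channel maximum of the frame is used instead.
    """
    if white_point is None:
        white_point = img.reshape(-1, 3).max(axis=0)
    # Dividing by the film base neutralizes the orange mask, so the
    # three channels start from a common 'white' before inversion.
    balanced = img / np.asarray(white_point)
    # Dense negative areas become bright positive areas.
    return 1.0 - np.clip(balanced, 0.0, 1.0)
```

In practice you’d sample the film base directly for the white point rather than trusting the frame maximum, and apply tone/gamma adjustments afterwards – but the core of the operation really is this small.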
So all considered, I’m setting up mostly for an argument for sake of the argument, really. If I were to fashion a DSLR scanning setup, I would just pick any old white LED panel, film holder and camera stand and have at it. No need in my mind to complicate things any further.
There is one conceivable theoretical argument for using a filtered light source: if you color balance the light source, you can (at least theoretically) get each color layer to overlap pretty much perfectly in the histogram, and that in turn will allow you to ‘expose to the right’ to optimize S/N ratio. Since the color layers in a typical color negative image are quite different in terms of relative densities, this approach makes sense from a theoretical viewpoint. See this digital mockup I made:
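To make the idea concrete, here’s a toy sketch of computing the per-channel gains that would line the three histograms up against the right edge. The function name and the headroom parameter are my own invention, not an established recipe:

```python
import numpy as np

def channel_gains_for_ettr(neg, headroom=0.95):
    """Per-channel gains that push each channel's brightest region
    (the film base) toward the right edge of the histogram.

    neg: linear capture of the negative, float array (H, W, 3).
    headroom: target level for the per-channel maximum, leaving a
    little margin below clipping.
    """
    per_channel_max = neg.reshape(-1, 3).max(axis=0)
    return headroom / per_channel_max
```

In a real rig, these gains would translate into the relative intensities of the R, G and B channels of the light source, so each layer uses the full dynamic range of the sensor.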
So if we assume it’s a good thing to have a filterable light source, along the lines of the argument above, what kind of approach would be preferable?
An obvious choice would be between ‘old tech’ and ‘new tech’, which for all intents and purposes is either a dichroic filtered light source scavenged from a color enlarger (old tech), or an RGB LED setup with individually controllable channels (new tech).
It seems that part of the rationale behind using a dichroic color head is an intuitive one: dichroic color heads have always been associated with printing color negative (and in the days of Ilfochrome: color positive) film. They must be a natural match, or even a match made in heaven, right? After all, we’ve been working with that combination for decades, so there must be something to it.
I personally think this match is more a case of opportunistic co-evolution than of the best possible match from a technical viewpoint. To expose a color negative (e.g. C41) image onto a color negative paper (e.g. RA4), you need to have some control over the relative exposure of the three different color images. There are two ways to do this color filtration: additive or subtractive. What they have in common, at least in ‘old tech’ as it was/is used in most color darkrooms or even analog minilab printers, is that they use a broad-spectrum light source (or several, as in some additive machines) and three band-pass (additive) or band-gap (subtractive) filters.
Ok, hold on for a little theory that you’re probably familiar with, but for those who aren’t maybe it helps, or it may act as a refresher.
Band-pass or band-gap filters are called thus because they pass or block a fairly broad band of wavelengths. E.g. a red band-pass filter may pass anything between let’s say 640nm and 700nm or so, a green band-pass filter may pass anything between 510nm and 600nm, and a blue band-pass filter may pass from around 400nm up to 490nm or so. These are actually the approximate numbers of the Wratten #70, #99 and #98, respectively. Furthermore, the edges of those bands can be quite steep, but don’t necessarily have to be.
The dichroic C, M and Y band-gap filters in a typical color head have similar cutoff wavelengths, but the transmission is inverted compared to RGB filters. I.e. a magenta dichroic filter will pass anything from the lower UV-A limits (350nm or so) up to about 500nm, and anything above about 600nm, blocking the band between 500nm and 600nm. Btw, there’s a very nice picture of this in a Thor Labs pdf I found. Note that in the image below, what’s ‘high’ in the graph represents light that’s being passed. A ‘low’ line denotes light being blocked. So that’s why I refer to the magenta filter as a band-gap filter: it creates a ‘gap’ in the light spectrum by blocking light between 500nm and 600nm.
Keep in mind we’re talking about light sources here, and how to manipulate the color of the light we shine through a piece of film.
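A crude way to capture the band-gap behavior in code: model each subtractive filter as a perfect block over one roughly 100nm band, passing everything else. The band edges below are simplified guesses, and real dichroics have sloped edges and imperfect blocking:

```python
# Idealized subtractive (band-gap) filters: each blocks one band of
# wavelengths and passes the rest. Edges are assumed perfectly steep.
BLOCK_BAND = {"yellow": (400, 500), "magenta": (500, 600), "cyan": (600, 700)}

def transmission(filter_stack, wavelength_nm):
    """Fraction of light passed at one wavelength through a stack of
    ideal band-gap filters."""
    for name in filter_stack:
        lo, hi = BLOCK_BAND[name]
        if lo <= wavelength_nm < hi:
            return 0.0
    return 1.0
```

So `transmission(["magenta"], 550)` gives 0 (the green ‘gap’), while `transmission(["magenta", "yellow"], 650)` still gives 1: stack magenta and yellow, and only the red band survives – which is exactly how a dichroic head dials in color balance from a single white source.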
The contemporary alternative to these filters, with inherently broad transmission spectra, would be lasers (chemical) or semiconductor light sources (LEDs or semicon lasers) with narrow peaks. For instance, the emission spectrum of a red LED might look something like this:
Note the difference between this narrow peak, and the broad spectrum that’s passed by a red filter. Note that both will look pretty ‘red’ to our eyes (although you will see a difference if you compare them directly). They’re fundamentally different.
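For comparison’s sake, such an LED peak is often approximated as a Gaussian. This little sketch (the peak and width values are illustrative, not from any datasheet) shows how quickly the emission falls off away from the peak:

```python
import math

def led_spectrum(wavelength_nm, peak_nm=660.0, fwhm_nm=20.0):
    """Relative emission of an LED modeled as a Gaussian peak.

    fwhm_nm is the full width at half maximum: the span over which
    the emission is at least half of the peak value.
    """
    sigma = fwhm_nm / (2.0 * math.sqrt(2.0 * math.log(2.0)))
    return math.exp(-((wavelength_nm - peak_nm) ** 2) / (2.0 * sigma ** 2))
```

At 10nm from the peak (half the FWHM) the emission is already down to 50%, and 40nm out it is practically zero – very unlike the 60nm-wide plateau passed by a red band-pass filter.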
You have to realize that such narrow-peak light sources, or even filters with a very narrow band-gap or band-pass behavior, have historically been difficult to make and hence were not really available to the mass market. So when color negative materials rose to popularity in the 1960s or so, the logical choice was to use not-so-perfect filters, and preferably dichroic ones, because those also allowed the use of a single light source without very complicated mirror optics to split out three different beams (open up an additive Philips PCS color head, and compare it to any Durst, Leitz etc. dichroic head and you’ll understand). For practical and economic reasons, we had to rely on pretty broad (i.e., non-selective) spectra for color filtration.
Add to this the nature of chromogenic dyes, which, as you might remember, are also not as clear-cut as they might ideally have been from a theoretical viewpoint. The ideal chromogenic dye blocks an entire color (a fairly broad spectrum) with great efficacy, and leaves all the rest of the spectrum alone. In reality, things don’t quite pan out that way, of course. Maybe you’re familiar with these images from a website that explains why there’s an orange mask on color negative film:
The ideal transmission curves for these dyes would look more like the filter behavior curves from the Thor Labs pdf: they would pretty much look like buckets, with a flat bottom smack at the horizontal axis, very steep walls (dead vertical would be nice), and a stable plateau right at 100%. In reality, it looks more like a Tuscan landscape (save for the cypresses).
Put into words, the problem is that the dyes, particularly cyan and magenta, also absorb colors in adjacent areas. So the cyan dye also absorbs some green light (and thus plays a role in forming the magenta layer in an RA4 color print), and the magenta dye also creates some image information in the blue area (contributing to yellow image formation).
This problem is to an extent overcome in color negatives by the orange mask. This mask is not just a constant orange (or in reality, brown) image; part of it is actually image-dependent. The cyan and magenta images on a color negative film consist in part of an inverted magenta resp. yellow dye image that’s formed alongside the main color. Actually – the color is not so much formed as unformed, as it’s the dye coupler itself that has the required color.
Take for instance the cyan dye image: if you can come up with a material that’s magenta on its own, but is converted to cyan, then you can create an image that is cyan, while its inverse at the same time remains in magenta. This will then compensate quite nicely for the imperfection of the transmission of the cyan dye itself. The correction won’t be perfect – but it will be (lots) better than nothing.
With the yellow dye, the problem is apparently unsolvable, because no color coupler exists that is magenta by itself and turns into yellow when coupled with oxidized color developer during processing.
The dye image corrections above are crucial if you make color RA4 prints (for instance) using a dichroic enlarger. As we established earlier, the light source in such an enlarger isn’t filtered to narrow peaks, but instead filters on fairly broad bands of adjacent wavelengths. As a result, if the dyes in the color negative have absorption in neighboring colors (e.g. the cyan dye blocks some green light), this is going to affect the colors as they appear on the printing paper. And as I argued above, the corrections formed by the orange mask will be remarkably good – but never perfect.
With scanning, the same thing happens as when printing on color paper. In fact, I suspect the problem is actually inherently much worse, but that it is compensated to a massive extent by digital processing within the camera itself. Let me explain by showing first the color sensitivity of RA4 paper:
Note how the sensitivity of the paper falls fairly neatly into distinct buckets. Especially red (forming the cyan image on the paper) is pretty ‘pure’. Blue and green (forming the yellow and magenta dye images in the print) are more problematic, with rather large parts of the blue-cyan color spectrum activating both layers at the same time.
I expect, but have never looked into very deeply, that some of this cross-sensitivity in the blue/green spectrum might be worked around by the dye transmission of contemporary C41 color films and their color correction masks (see the orange mask story above). But that’s kind of wild guessing on my part, and a bit of a sidenote.
Looking at those RA4 paper sensitivity plots, what does stand out is that the peaks are pretty broad indeed. So any crossover of color filtering from one image color to another will be picked up by the paper. I.e., the paper is inherently quite sensitive to crossovers induced due to unwanted negative film dye transmission outside of their intended spectrum.
Now, why do I believe the problem might actually be even worse (in theory) for digital cameras? Have a look at this plot, which is the sensitivity of a Canon CMOS sensor as found in the EOS600D / Rebel T3i. Yes, it might be long in the tooth by now, but I don’t think there have been very fundamental changes in these kinds of plots given the underlying physics. Besides, if you look at similar plots for several camera types, there are differences, but the overall pattern is quite similar.
Remember how RA4 paper is cross-sensitive in the sense that e.g. the blue-sensitive layer will activate to some green light? Well, looking at how a common CMOS sensor performs, the problem appears to be much worse. The red sensor sites will activate to some green and even blue light, the green sensor sites will pick up just about anything in the visible spectrum except blue that borders on UV, and the blue sensor sites are in this example not very selective at all.
So why don’t real-world DSLR images come out as one murky brown mess anyway? That’s what you’d expect from the plot above. The only explanation can lie in complex mathematical filtering of the signal based on calibration data collected by the sensor and/or camera manufacturer. By exposing the sensor to various wavelengths and measuring the R, G and B signals, compensation curves (algorithms) can be made, and those can then be applied by the camera firmware (or RAW converter, perhaps) to obtain a clean(er) R, G and B image of a heavily contaminated, well, mess.
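The usual form such a correction takes is a 3×3 matrix multiplied into each pixel’s raw RGB triplet. The coefficients below are made up for illustration (real ones come from the manufacturer’s spectral calibration), but they show the principle: negative off-diagonal terms subtract the cross-talk each channel picked up from its neighbors:

```python
import numpy as np

# Made-up color correction matrix, purely for illustration. Each row
# sums to 1.0 so that neutral grays stay neutral; the negative
# off-diagonal terms remove cross-talk from the other two channels.
CCM = np.array([
    [ 1.6, -0.4, -0.2],
    [-0.3,  1.5, -0.2],
    [-0.1, -0.4,  1.5],
])

def apply_ccm(raw_rgb):
    """Map a contaminated raw sensor RGB triplet to cleaner RGB."""
    return np.asarray(raw_rgb) @ CCM.T
```

A pure-red stimulus that leaked into the green and blue raw channels gets pulled back toward pure red, at the cost of some amplified noise – which is one reason cross-sensitive sensors are noisier in saturated colors.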
Assuming that this digital correction is perfect, we shouldn’t have to worry too much about it. For all intents and purposes, in a practical setting, I’d happily rely on the aptitude of the Canons and Nikons of this world to get it (pretty much) right. But the physical phenomenon is there, and I can’t shake the feeling that if you can circumvent the problem to begin with, it might actually help. A little.
By now, we’ve crossed over (no pun intended) into ‘new tech’ territory. We’ve explored old-style dichroic filtered light sources and encountered the inherent problems of color negative dyes. Could we then use some more ‘new tech’ to inherently do it better? Well, I suspect we could. And again, whether it makes practical sense – I really don’t know. But theoretically at least, it seems plausible.
Here’s what I’m thinking about: the dye image in a color negative has some unwanted absorption, which is partly but likely not entirely fixed with the embedded/inherent correction mask in the color negative. But the extent of the unwanted absorption problem depends a bit on the wavelength. It’s more problematic in some areas than in others. Given the fact that semiconductor light sources (LEDs, semicon lasers) are fairly narrow-band, couldn’t we try to put this into practice? It seems to me that we could try something like this:
Here, I’ve taken those spectral transmission plots of the imperfect color negative dyes and put them on top of each other. Moreover, I’ve drawn in some vertical colored lines in places that I think make sense. What I’m doing here, is looking for a place where the color we want to record gives a good signal, and the undesired transmission of the other colors is as low as possible.
For the blue color (the yellow dye in the color negative image), this aligns with about a 420nm wavelength. Here, the magenta dye shows the highest transmission (so no unwanted filtering) and the cyan dye also performs reasonably well. Looking only at the cyan dye, it would be better to pick a much longer blue wavelength, close to around 480nm, but this would come at the cost of greatly increased unwanted magenta/green absorption.
For green (the magenta dye), there seems to be an optimum around 540nm (give or take), where the actual green filtration is the strongest, and the combined and unwanted yellow/blue and red/cyan opacity are the least – although it’s an inherently compromised situation. Still, we can select a bit of an optimum, which seems nice.
For red, it’s a little easier, as anything beyond about 650nm or so will be fairly pure, with little to no unwanted absorption due to the other dyes except the cyan one.
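The eyeballing exercise above can be mechanized. The sketch below uses toy Gaussian stand-ins for the three dye absorption curves – the peak/width numbers are rough placeholders, not measured data – and simply scans for the wavelength where the wanted dye’s absorption most exceeds the combined unwanted absorption of the other two:

```python
import math

def gauss(x, peak, width):
    """Gaussian bump, used as a stand-in for a dye's absorption curve."""
    return math.exp(-((x - peak) ** 2) / (2.0 * width ** 2))

# Toy absorption curves for the three C41 dyes as (peak nm, width nm).
# These are placeholder values, not taken from real dye data.
DYES = {"yellow": (440, 40), "magenta": (540, 45), "cyan": (660, 55)}

def best_wavelength(target_dye, lo=400, hi=700):
    """Scan in 1nm steps for the wavelength where the target dye absorbs
    strongly while the other two dyes absorb as little as possible."""
    best_wl, best_score = lo, -math.inf
    for wl in range(lo, hi + 1):
        wanted = gauss(wl, *DYES[target_dye])
        unwanted = sum(gauss(wl, *v) for k, v in DYES.items() if k != target_dye)
        score = wanted - unwanted
        if score > best_score:
            best_wl, best_score = wl, score
    return best_wl
```

With real transmission plots in place of the Gaussians (and perhaps a weighting for the sensor’s own spectral response), this same scan would produce the optimal LED wavelengths directly instead of by eye.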
Now, the above optimal wavelengths are based on some pretty old plots for dyes that are most likely no longer used in color negative films. The plots look to me like those from maybe 1960s or 1970s dye sets, and dye sets have been re-engineered a couple of times since. The basic principles will be the same, but the exercise would have to be repeated for a more modern dye set. I have not scrounged the web (yet) for more contemporary color negative dye transmission plots. I’m sure they are out there, somewhere. It might be interesting to repeat the exercise with such data.
If you plot the wavelengths I arrived at, 420nm blue, 540nm green and 650+nm red, on the sensitivity of the CMOS sensor I displayed above, you can see that also from that perspective it’s actually a pretty nice compromise. At these wavelengths, the cross-sensitivities are fairly low (relatively speaking; in absolute sense, they are still pretty significant).
But frankly, you’d have to actually consider the specific sensor used in the DSLR that’s to be used for the scanning job. Just like with the dye transmission curves, I just took what I could easily find to show the principle. To do a proper job, you would have to figure out what a good compromise between the optimum peaks for the color negative dyes and for the CMOS/CCD sensor would be. One could then build a light source using LEDs of appropriate wavelengths, and make each color channel dimmable (PWM would probably work just fine, given long enough exposure times).
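As a sanity check on the PWM suggestion: dimming is just a duty cycle, and the only real requirement is that the exposure spans enough PWM periods that the camera integrates an effectively constant brightness. A tiny sketch, with arbitrary assumed numbers for the PWM frequency and exposure time:

```python
def pwm_settings(intensity, pwm_hz=1000.0, exposure_s=0.5):
    """On-time per PWM period for a target relative intensity (0-1),
    plus the number of PWM cycles the exposure spans. Many cycles per
    exposure means the flicker averages out completely."""
    period_s = 1.0 / pwm_hz
    on_time_s = intensity * period_s
    cycles_per_exposure = exposure_s / period_s
    return on_time_s, cycles_per_exposure
```

With these numbers, a half-second exposure spans 500 PWM cycles, so even a channel dimmed to 25% reads as a perfectly steady light to the sensor.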
If you combine this approach of tailoring the wavelength of an RGB light source to the dye sets of today’s C41 films (and perhaps ECN2, if you fancy) and the spectral performance of the camera used, I expect that slightly better color performance could be possible than with a broad-spectrum light source and dichroic filtration.
If you furthermore add an ‘expose to the right’ regime to this, further signal/noise ratio optimization is possible, reducing unwanted chroma noise and further improving color purity.
Well, that’s my paper napkin approach on the issue. I’m aware that there are many points in the argumentation where justified criticism is possible. As they say, the devil is in the detail, and it’s very likely that I’ve overlooked one or two pretty crucial ‘details’ that may spoil the broth. If that’s the case, then at least I hope this post will trigger others to correct my mistakes and come up with a better answer. That would be progress as well. In any case, it would get us past the point where people state that some kind of light source is preferable, mostly because they intuitively feel that this is the case. Intuition is a great inspiration, and if all else fails, we can even use it in decision-making. But a more scientific approach is at least worthy of consideration.
Addendum 11 Dec 2024: I just came across a blog post by Alexi Maschas who reached a similar conclusion, but starting apparently from the practical observation that he found his RGB-illuminated DSLR scans better than broad-spectrum white light illuminated scans. He also had a look at the illuminators of Fuji and Nikon scanners and determined they use narrow-band LED light of around 450nm blue, 540nm green and 650nm red. Funny enough, this is what I ended up using for my RGB enlarger light source as well, except that ‘my’ green is a little lower at 525nm.
Additionally, Maschas refers to a very interesting 2018 report by Flueckiger et al. that goes into scanning solutions for old color film stock. On page 14 they suggest wavelengths of 460nm blue, 525nm green and 680nm red, relying on essentially the same kind of analysis that I did in a quick & dirty fashion in this blog, but more thorough.