Trans-mission – A parts bin transmission densitometer

I’ve never owned a transmission densitometer. In fact, I consider myself far too sloppy and undisciplined in the darkroom to be able to put one to good use – so why bother? Then again, sometimes it’s just convenient to be able to take a few quick and dirty measurements on a piece of film. And there’s always the parts bin full of stuff that surely could be fashioned into a densitometer, right? Right.

The immediate ‘need’ for a transmission densitometer arose a few weeks ago as I was experimenting with chromium intensifier for carbon transfer printing. That’s a story for a different blog, but the short take-away is that I needed something a little more systematic than “a dash of this and a squirt of that”. So I found myself repeatedly scanning some x-ray step wedges side by side with a Stouffer step wedge to determine optical densities. Hey, it works…but it gets tedious, pretty fast.

So I figured a ‘proper’ transmission densitometer would be nice. But I couldn’t be bothered to go on eBay & co to find one. I assume (perhaps wrongly) that many of the densitometers on the second-hand market are (1) decades old, (2) long since separated from the calibration strips they need to remain accurate, and (3) reliant on replacement light bulbs that are no longer obtainable. Oh, I also suspect that you’ll have to shell out a couple of hundred Euros for all these niceties. Maybe, one day – but not today.

How about that parts bin, then? One thing I knew I’d find in there was a simple TCS34725 color sensor module. A few LEDs and a microcontroller are easy to find, too. In principle, if you take an LED, shine light through a sample and measure that with the TCS34725 color sensor, you should be able to work out optical density. At least if you compare the measurement against a reference measurement. So the theory of operation is simple enough.
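
For reference, ‘working out optical density’ is nothing more than taking the logarithm of the ratio between the two readings. A minimal sketch in plain C – floating point here for clarity, although the actual firmware ended up avoiding it, more on that further down:

    #include <math.h>

    /* Optical density from a reference reading (no film in the light path)
       and a sample reading (film in the light path). */
    static double optical_density(double reference, double sample)
    {
        /* D = log10(I_ref / I_sample): a tenth of the light is 1.0 logD,
           a thousandth is 3.0 logD, and so on. */
        return log10(reference / sample);
    }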

Since the TCS34725 is actually a color sensor, I figured I might as well make a color densitometer. Instead of a white LED, I took an RGB LED and gave it a try. Using a breadboard setup with an Arduino Nano, I was able to work out that the RGB LED I used was a nice starting point, but woefully underpowered for measuring densities beyond 2.0logD or so.

Test objects; some 3D printed fittings for a light source, and the RGB LED light source on a piece of copper PCB for initial tests.

I also figured that the light path is actually the challenging bit. The basic approach I chose is as follows:

I chose to put the light source / LEDs at the bottom, then some kind of aperture plate on top of that to create a measurement spot. The film/sample sits on top of that plate. A corresponding aperture in a plate on top of that allows the light through to the sensor, which sits on a PCB positioned over the top aperture plate.

The principle is simple; the practical details are a little more complicated. In part this is because the top assembly must be movable, so that the film/sample can be positioned accurately over the measurement aperture. When the sensor is lowered back into place, the sensor aperture needs to align perfectly with the light source aperture – so part of the assembly needs to be movable and rigid at the same time.

Then there was the issue of the LEDs not being underneath the aperture, and LEDs being directional light sources to begin with. I chose a measurement aperture of around 2mm, but how to get the light from the LEDs to shine neatly through the aperture? Initial tests suggested problems in this area. I chose to create some distance between the LEDs and the measurement aperture so that the light from the LEDs has some chance to ‘fan out’. In the end, I also applied some diffusion on the actual apertures (both the sensor and light source apertures) to reduce the illumination hot spots and make the whole contraption more robust and less prone to slight physical alignment errors.

In the ‘finished’ device, it looks like this:

Three RGB LEDs are mounted on the green PCB, underneath the black chimney-like object. This chimney acts as a sort of light integrator and has an aperture (ca. 2mm) on top. Note also the array of white (seemingly yellow) LEDs around the actual measurement light source; these form the film illumination that helps with positioning the sample over the measurement spot. Over this area lies a diffusor sheet, which is really just a piece of translucent white plexiglass with a hole in the middle, and a little piece of diffusion material (in fact some screen printing inkjet film, held in place with an elegantly ripped-off piece of purple masking tape) to make the measurements a little more consistent.

The sensor assembly sits in an arm with a hinge, and the underside looks like this:

The white aperture in the middle sits on top of the light source aperture shown in the previous images. I used a piece of felt padding around the aperture to keep out light seeping in from the sides. A piece of white diffusion material is placed over the measurement aperture to ensure even illumination of the (hidden) sensor behind it.

The sensor element ‘falls’ on top of the light source below like this:

Yes, it’s a small device. It’s OK for formats up to 4×5″, but it can’t reach far beyond the borders of an 8×10″ negative.

That was the tricky bit. Well, one tricky bit out of two, as it turned out. The other tricky bit was that in my wisdom, I chose to use an STM32F103 microcontroller because I figured that it would have plenty of computing power to deal with these relatively simple tasks:

  • Control the measurement light source, i.e. 3 LEDs and an associated variable current source.
  • Read out sensor data from the TCS34725 (I2C interface).
  • Toggle the white background illumination.
  • Read out a button and a rotary encoder for user input.
  • Output measurement data etc. to a tiny OLED (I2C again).
  • Output measurement and debugging data over UART.

Turns out this wasn’t as simple as I had envisioned. I made the initial proof of concept with an Arduino Nano and programmed it in Arduino. That makes for quick testing, since Arduino is fairly user-friendly. But Arduino is also kind of top-heavy due to its high abstraction level: there’s a lot of stuff going on in a multitude of libraries that hide the hardware complexity from the programmer. That works fine if you use something like a Nano (i.e. a Microchip ATMega328P), for which the Arduino code is optimized quite well.

As it happens, the Arduino implementation for the STM32 is kind of shoddy and ginormously inefficient. The approach STM chose to open up their STM32 series of microcontrollers to the Arduino world was to take their high-abstraction-level HAL (Hardware Abstraction Layer), in itself a bloated and rather inefficient set of libraries that exposes the hardware functions of the microcontroller to the user. Then, for the Arduino part, they programmed Arduino-compatible libraries essentially on top of that HAL, making the whole thing even more inefficient.

The net result is that when I tried to port my working Arduino proof of concept from a Nano to the STM32F103 I chose (with essentially twice the program memory of an ATMega328P), I ran out of program memory before I even had the sensor and display routines working at a very basic level – let alone any other stuff that needed to be done. So back to the drawing board.

Cutting a very long story short: I ended up programming the device in STM32Cube using a mix of C++ and C, totally bypassing the inefficiency of HAL and basically rewriting all of the low-level stuff using direct register access. Yes, including an I2C stack, PWM routines, ADC, UART etc. etc. I also ended up replacing most of the floating point math with integer math where humanly possible. Curses, curses, and a whole lot of work – but the end result is massively more efficient than the HAL + Arduino bloatware approach that evidently didn’t work.
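
To give a flavor of what that direct register access looks like in practice, here’s a trivial sketch – not taken from the densitometer firmware – that sets up and toggles a GPIO pin on the STM32F103 using nothing but the CMSIS device header:

    #include "stm32f103xb.h"   /* CMSIS device header only; no HAL, no LL */

    /* Configure PC13 as a 2 MHz push-pull output and toggle it. Just an
       illustration of register-level programming, not part of the firmware. */
    static void pin_init(void)
    {
        RCC->APB2ENR |= RCC_APB2ENR_IOPCEN;                 /* clock for port C        */
        GPIOC->CRH &= ~(GPIO_CRH_MODE13 | GPIO_CRH_CNF13);  /* clear pin 13 config     */
        GPIOC->CRH |= GPIO_CRH_MODE13_1;                    /* output 2 MHz, push-pull */
    }

    static void pin_toggle(void)
    {
        GPIOC->ODR ^= (1u << 13);                           /* flip the output bit     */
    }

Multiply that mindset by an I2C peripheral, a few timers, the ADC and the UART, and you have a rough idea of where the time went.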

The ‘finished’ device uses roughly half of the available RAM and a little over half the available program memory. The Arduino prototype couldn’t manage to fit both the display and sensor routines in the program memory at the same time, let alone all the rest that needed to be done. I guesstimate that the low-level approach I took resulted in about 10-20% of the code size the Arduino approach would have taken. I could of course just have used a massively oversized microcontroller with oodles of memory instead and accepted all the inefficiencies of Arduino – but where’s the fun in that!?

OK, so back to the hardware for a minute. The overall system design is as follows:

I mentioned a variable current source for the measurement LEDs. This was a bit of an experimental approach, which I figured would be handy because I found it difficult to estimate the relative power I would have to run the three LEDs at. After all, there was the spectral sensitivity of the sensor to take into account, as well as the physical properties of the light path that would attenuate one color more than another due to…well, factors.

This is why I decided on some kind of way to regulate the LED current. I could have just PWM-ed them and frankly, there’s very little/nothing against that approach. But hey, experimental moods and all that. So I made a little servo circuit whose bias is set by a PWM signal from the microcontroller, run through a triple RC filter to create a stable DC level.

A simple uC-controlled, low-power, variable current LED driver

I use a single LED driver circuit for all three LEDs because the LEDs don’t need to be on at the same time, anyway. So they are individually switched on with a low-side MOSFET switch, and the desired current can be set prior to switching on an individual LED. It’s a simple system and it works flawlessly in practice. To offer a bit more control, I also fed the shunt voltage from the LED driver back into an ADC port of the microcontroller so I can actually set the desired current in the software and have the microcontroller iterate its way towards that desired current. Not really necessary, but neat, nonetheless.
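
The ‘iterating towards the desired current’ bit is nothing fancy in software. Roughly along these lines, with hypothetical helper names standing in for the actual register-level PWM and ADC code:

    #include <stdint.h>

    /* Hypothetical helpers; in the real firmware these poke the timer and ADC
       registers directly. */
    extern void     led_bias_pwm_set(uint16_t duty);   /* RC-filtered PWM bias      */
    extern uint16_t led_shunt_adc_read(void);          /* shunt voltage, ADC counts */

    /* Nudge the PWM bias until the measured shunt voltage matches the target.
       A crude proportional step is plenty: the light source only needs to be
       stable, not fast. */
    static void led_current_settle(uint16_t target_counts)
    {
        int32_t duty = 0;
        for (int i = 0; i < 50; i++) {                 /* bail out eventually */
            int32_t error = (int32_t)target_counts - (int32_t)led_shunt_adc_read();
            if (error > -2 && error < 2)               /* close enough */
                break;
            duty += error / 2;                         /* crude proportional step */
            if (duty < 0)      duty = 0;
            if (duty > 0xFFFF) duty = 0xFFFF;
            led_bias_pwm_set((uint16_t)duty);
        }
    }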

The main PCB is a straightforward affair; it’s a double-layer design with no particular bells & whistles, of the kind I often make at home:

Main PCB, top side, after etching and before drilling and applying the solder mask
Main PCB with components mounted. The grey flat cable at the bottom with the JST connector is the UART connector that also supplies 5V or 3.3V to the PCB. The black box connector to the right goes to the user interface daughterboard and the light sensor. The three measurement LEDs are visible on the left side of the PCB, in the center of the ring of white backlight LEDs. The variable current servo LED driver is at the center top with three little MOSFETs (AO3400) below to switch the individual LEDs. The DC-DC step up converter for the white LEDs is just visible between the black box connector and the grey flat cable, top right on the PCB.

The finished hardware package is quite compact and lightweight. This is owing mostly to the use of 3D printed parts for the ‘frame’; printing large and voluminous parts takes a long time and I’m not really that patient. The aspect ratio is a little funny in the photo below; my phone does weird things when taking close-up photos. In reality, the device is more stretched out, like the PCB shown above.

The main PCB is inside, at the bottom of the device. The frame is open, but I taped along the side where the LEDs are because the backlight LEDs are so bright. The user interface PCB is at the top, on the arm that can swing up and down; it’s shown propped up halfway here. The big yellow button is used to zero the device and to take measurements. The rotary encoder is there because…well, a project needs a rotary encoder!

In all seriousness, when the hardware was done, I wasn’t quite sure yet what the rotary encoder was supposed to do. I ended up using its button/key for toggling linearization (see below) and the encoder bit for setting the number of samples to take in each reading. I’m glad I added it, in hindsight.

In terms of software, I mentioned that I ended up doing everything the bare metal way in STM32Cube without the use of HAL, and in fact even skipping the Low Layer drivers provided by STM in many cases.

One challenge in a transmission densitometer is dealing with dynamic range. logD numbers are deceptive – a range of 0 through 3.0logD doesn’t look quite as dramatic as it is in reality, i.e. a contrast range of about 1:1000. This means that a measurement through a density of 3.0logD involves a light level 1000 times lower than the normal light level that reaches the uncovered sensor.

To deal with such a large dynamic range, there’s firstly the fortunate fact that the TCS34725 is a 16-bit device. This means it offers a 1:65535 contrast range just like that – at least in theory. In reality, the usable resolution will be a little less, and the final few bits tend to be iffy anyway – down at the bottom of the range, the difference between a count of 1 and 2 is a factor of two, or 0.3logD, so a tiny deviation will end up looking like a big density difference.

So I use both sensor gain and integration time to squeeze out a little more performance. The TCS34725 is an integrating sensor, which means it essentially ‘counts’ the amount of light that hits it in a certain timeframe. It then applies a set amount of gain to this signal to raise sensitivity. This sensor allows integration times up to a little over 600ms and gain up to 60x. I let the device step through a few gain settings and integration times to find a suitable combination that allows sufficient sensitivity without being unnecessarily slow or imprecise. In practice, this yields apparently reasonable behavior beyond 3.0logD.
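
Schematically, the stepping looks something like the sketch below. The helper functions and the exact gain/integration ladder are illustrative rather than verbatim; the real code talks to the TCS34725 over I2C at register level:

    #include <stdint.h>

    /* Hypothetical TCS34725 helpers (the real firmware uses its own I2C stack). */
    extern void     tcs_set_gain(uint8_t gain_index);   /* 0..3 -> 1x, 4x, 16x, 60x   */
    extern void     tcs_set_atime(uint8_t atime);       /* integration time register  */
    extern uint16_t tcs_read_clear(void);               /* one clear-channel reading  */

    /* Gain/integration combinations in order of increasing sensitivity.
       Integration time is (256 - ATIME) * 2.4 ms, so 0x00 is ~614 ms. */
    static const struct { uint8_t gain; uint8_t atime; } ladder[] = {
        { 0, 0xF6 },   /*  1x,  ~24 ms */
        { 0, 0x00 },   /*  1x, ~614 ms */
        { 1, 0x00 },   /*  4x, ~614 ms */
        { 2, 0x00 },   /* 16x, ~614 ms */
        { 3, 0x00 },   /* 60x, ~614 ms */
    };

    /* Step up the sensitivity until the reading is comfortably above the noise
       floor, or until the sensor is maxed out. The chosen gain/integration time
       must of course be carried along so the reading can be normalized against
       a reference taken at the same settings (omitted here for brevity). */
    static uint16_t tcs_autorange_read(void)
    {
        uint16_t clear = 0;
        for (unsigned i = 0; i < sizeof ladder / sizeof ladder[0]; i++) {
            tcs_set_gain(ladder[i].gain);
            tcs_set_atime(ladder[i].atime);
            clear = tcs_read_clear();
            if (clear > 1000)            /* arbitrary 'enough signal' threshold */
                break;
        }
        return clear;
    }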

Another item in the collection of stop-gap measures I had to adopt was figuring out how to calculate densities from the measurements. The light sensor just gives absolute readings, but the kind of densities we’re used to are logarithmic. The STM32F103 is a very basic microcontroller with no hardware support for floating point math. Moreover, by the time I got to the math bit, I was also running out of program memory (later on, I optimized this and cut it back again by some 30%), so I didn’t even think of using a C library to calculate the logarithms.

Instead, I did it the crude way by (1) calculating how many factors of two there are between the base measurement and the actual density measurement. Each factor of two is a 0.301logD step. Calculating factors of two is easy; it’s a matter of dividing one reading by the other and then shifting bits on the outcome until you’ve got nothing left. Count the bit shifts necessary, multiply by 0.301 (I actually used 301 so I could avoid floating point math) and that’s the lion’s share already.

I then (2) approximated the last bit through a simple lookup table. Take the remainder that’s left after determining the factors of two and find the closest value in a lookup table. The table needed to cover the range of 0.0logD to 0.301logD with satisfactory resolution is only a few hundred bytes in size, which I could spare. Crude, but it works quite well.
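
In code, the whole thing boils down to something along these lines – a simplified sketch, with density in integer milli-logD units and a much coarser lookup table than the one in the actual firmware:

    #include <stdint.h>

    /* log10(ratio) * 1000 for ratios 1.0000, 1.0625, 1.1250 ... 1.9375 (1/16 steps).
       The firmware uses a finer table; this coarse one just shows the idea. */
    static const uint16_t log_table[16] = {
          0,  26,  51,  75,  97, 118, 138, 158,
        176, 194, 211, 227, 243, 258, 273, 287
    };

    /* Density in milli-logD (2301 means 2.301 logD) from a reference reading
       (no film) and a sample reading. Integer math only: count the factors of
       two (each worth 301 milli-logD), then look up the leftover fraction.
       Readings are assumed to fit comfortably in 28 bits. */
    static uint32_t density_millilog(uint32_t reference, uint32_t sample)
    {
        if (sample == 0)
            sample = 1;                  /* clip at the top instead of dividing by 0 */

        /* Scale the ratio by 16 so the fractional part survives integer division. */
        uint32_t ratio16 = (reference * 16u) / sample;
        if (ratio16 < 16u)
            return 0;                    /* sample brighter than reference: call it 0 */

        uint32_t shifts = 0;
        while (ratio16 >= 32u) {         /* i.e. the ratio is still >= 2.0 */
            ratio16 >>= 1;
            shifts++;
        }
        /* ratio16 is now 16..31, i.e. a ratio of 1.0..2.0 in 1/16 steps. */
        return shifts * 301u + log_table[ratio16 - 16u];
    }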

The housing is nothing special; it’s literally only the bare necessities to hold everything together and (sort of) in place. The main frame I print in two parts: a bottom part that the main PCB mounts onto, and a top part that’s glued to it and that forms the base for the sensor arm.

Two-part base frame. The left part is glued on top of the right part. The main PCB screws onto the small round mounting points inside the frame. The top part has holes for screw-fixing the components related to the sensor arm.

The sensor arm needs to hinge, which I solved by taking a random bit of copper wire and printing some mounts with lateral holes for the copper wire to fit into. I figured the whole thing would become rather wobbly, so to keep the sensor aligned with the LED aperture, I printed a cross-beam arrangement that falls with narrow margins between two ‘wings’ that are mounted on the base frame. This way, the arm always lands in the same spot.

Sensor arm parts. The arm itself has a cross-beam that falls between two permanently mounted wings for proper alignment of the sensor. The sensor aperture is visible on the right; the sensor module screws into the square space at the end of the arm. The rounded bits on the left connect the alignment ‘wings’ with the arm hinge; these bits are mounted to the base frame. The arm swings on a piece of copper wire that threads through the arm and into the slots on either side.

Now for the million dollar question – does the whole thing work? Well, yes. Sort of. For the most part. As far as I can tell.

The first tests weren’t as encouraging as I had hoped. I have a handful of step wedges with (approximately) known densities, so I started doing some measurements on those to see what the device would read. One of the earlier tests looked like this:

Early test measurements taken with a T2115 step wedge. RGB channels shown in their own colors, black is the nominal wedge density that should have been read…

Uhm. Not so good. Well, it was promising in the sense that the readings did follow a general trend upwards, but that was about it…

So I fidgeted a little more with diffusors and masking out stray light. At some point, I was left with surprisingly high readings when trying to read very high densities, or even do readings without any exposure from the R, G or B LED. I then realized that light was actually falling through the vias on the light sensor PCB (a commercially available module), which I solved by 3D printing a simple cap that sits on top of the sensor module.

With some hardware interventions, I was able to get a decent starting point – although it wasn’t quite as linear as I had expected. I’m not sure where the remaining non-linearities stem from. They might have something to do with stray light and/or reflections, but I decided to try and solve it as well as possible in software by adding a linearization routine.

Readings after some hardware interventions to reduce stray light and add a little diffusion here and there. Not quite linear, and some spurious readings now and then, but hey – it’s something.
Analysis of errors on each channel and each density/light level

For the linearization, I analyzed the error for each channel/density combination and sought the easiest way out. A linear adjustment sadly didn’t quite cut it, so I ended up doing a second order polynomial adjustment with parameters tailored to each channel. This got me close enough to a Stouffer T2115 reference to call it a day (or rather, a couple of weeks):

Measurements against nominal for a T2115 reference after applying linearization algorithm
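
The correction itself is then just a per-channel quadratic applied to the measured density. Schematically it looks something like the sketch below – the coefficients shown are placeholders, not the fitted values in the device, and the scaled-integer bookkeeping is just one way of keeping it in the same milli-logD integer units as the rest:

    #include <stdint.h>

    /* Per-channel second order correction: d' = a*d*d + b*d + c, with d in
       milli-logD. The coefficients below are placeholders, NOT the values
       used in the device. Scaled integers: a in units of 1e-6, b in units
       of 1e-3, c in milli-logD. */
    struct lin_coef { int32_t a_e6; int32_t b_e3; int32_t c; };

    static const struct lin_coef lin[3] = {   /* R, G, B (placeholder numbers) */
        { -20, 1050, -15 },
        { -15, 1030, -10 },
        { -25, 1080, -20 },
    };

    /* d_milli: uncorrected density in milli-logD; ch: 0 = R, 1 = G, 2 = B. */
    static int32_t linearize(int32_t d_milli, int ch)
    {
        int64_t d = d_milli;              /* widen to avoid overflow in the d*d term */
        return (int32_t)( (lin[ch].a_e6 * d * d) / 1000000
                        + (lin[ch].b_e3 * d) / 1000
                        +  lin[ch].c );
    }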

When connected to a computer with a USB-UART interface, the device conveniently spits out its measurement data to a terminal, which makes it a whole lot easier to analyse the results. Here’s an example of two measurements of different spots on an 8×10″ negative:

The device reports for each channel (R-G-B) the normalized 16-bit sensor data (quasi-raw; internally, it works with 32-bit data to better deal with a large dynamic range), logD density, the gain and integration time, and a timestamp.

The performance of the device is arguably not too good at low densities, between 0 and 1.0logD. Furthermore, I don’t know how well it works beyond 3.0logD, but I’m happy to call anything substantially more dense than that “bloody awful dense indeed” and leave it at that! Still, I was doing some carbon prints from fresh 8×10″ negatives today, and with this densitometer I could quickly determine a suitable starting point for the exposure of the carbon tissue, and determine whether or not I needed to give these negatives a round of dichromate intensification. It works well enough for this, it seems, and I’m happy with that!

For assessing color negatives (not that I’m really planning to do this), I wouldn’t quite rely on this, yet. Honestly, I’m actually tempted to change my approach and skip the TCS34725 and instead build an analog sensor head myself. Maybe…maybe. It’s tempting. Well, who knows.

But for now, I think I should go and make some more prints. I think I’ve got the electronics bug out of my system…at least for the next few days (I hope!)
