I’ve been using cheap & cheerful UV floodlights to expose my alternative process prints for a little over a year now. Since then, a few people have remarked that there’s a problem with these LED flood lights: their light output drops as the LEDs heat up during operation. How bad is the problem, really, and what can be done about it?
The current exposure unit I’m using is an array of four floodlights, each with a nominal rating of something like 300W but an actual power consumption of only around 75W. Combined, that still makes the entire unit an effective 300W. The type I’m using presently consists of 395nm LEDs with a smaller number of 365nm ones mixed in, so it’s effectively a dual-wavelength unit; in this particular type, there are two 395nm LEDs for every 365nm one.
Sometime last year, I built a little timer unit for this setup. I was aware of the issue of output degradation as LEDs heat up, and with that in mind, I figured I would include a way to hook up a UV sensor to the device.
The sensor presently attached to the timer is built around a LAPIS ML8511 module, which has a peak sensitivity around 365nm. It’s an analog sensor, so I interfaced it with an ADS1115 ADC and the whole thing connects through a generic Ethernet cable to the timer, so it can be placed at a convenient spot during exposure – for instance on the edge of the contact printing frame.
If you’re wondering: yes, I’m running I2C over an Ethernet cable (an unshielded one, at that), and no, this isn’t good practice. It kind of works as long as the environment isn’t too noisy in terms of EMI, but it’s a compromise.
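For reference, turning the ML8511’s analog output into an approximate UV intensity is just a linear mapping. Here’s a minimal sketch in Python; the zero- and full-scale voltages are rough figures read off the typical response curve in the datasheet, not calibrated values, so treat them as assumptions and check your own part:

```python
def ml8511_uv_intensity(v_out, v_zero=1.0, v_full=2.8, uv_full=15.0):
    """Convert an ML8511 output voltage to approximate UV intensity (mW/cm^2).

    The sensor's response is close to linear. v_zero is the approximate
    output at zero UV, v_full the approximate output at uv_full mW/cm^2;
    both are rough datasheet figures, not calibrated values.
    """
    intensity = (v_out - v_zero) / (v_full - v_zero) * uv_full
    return max(0.0, intensity)  # clamp noise below the dark level to zero
```

In my setup the voltage itself comes from one channel of the ADS1115, read over that (admittedly questionable) I2C-over-Ethernet link.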
So far, I’ve used the exposure timer as just that: I set the time, the unit switches the light source on and off, and that’s all there is to it. I haven’t yet used it as an actual light integrator, even though I easily could. The functionality is already built into the software and it works. I guess I’m just stuck in my old ways…
Today, I figured I could at least use the sensor to get an impression of the output degradation of these LED floodlights I’m using. I was exposing a carbon transfer for 40 minutes, which is on the long side for my practice, but it did give me a nice opportunity to track the actual UV output power. So I placed the UV sensor on the contact printing frame and jotted down the measured UV output every 5 minutes.
The UV power is expressed relative to the maximum range of the sensor/ADC combination as I programmed it, so the absolute numbers don’t mean much. What I do know is that the sensor’s response is fairly linear and not very temperature-sensitive (see its datasheet), which means a ratiometric comparison of the numbers should be reasonably reliable. Here’s the plot over the course of 35 minutes:
This shows that the power output drops from about 15% down to about 10% after 15 minutes, so a reduction of a little over 30% in UV flux. A significant effect, indeed.
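To make that ratiometric comparison concrete: normalizing each reading to the first one turns the raw numbers into relative output, and the fractional drop falls out directly. The sample values below are hypothetical, just chosen to match the roughly 15%-to-10% decline described above:

```python
def normalize(readings):
    """Express each raw reading relative to the first (t=0) reading."""
    first = readings[0]
    return [r / first for r in readings]

# Hypothetical raw readings, one per 5 minutes over 35 minutes:
samples = [0.150, 0.135, 0.122, 0.113, 0.106, 0.102, 0.100, 0.100]

relative = normalize(samples)
drop = 1.0 - relative[-1]  # fractional loss of UV flux vs. t=0
```

Because only ratios are used, the arbitrary units of the sensor/ADC combination cancel out entirely.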
So what to do about this? So far, my ‘solution’ is to just accept it for what it is. It’s amusing and somewhat useful to know the magnitude of the effect, but honestly, I’m inclined to shrug and continue using the unit as it is. I could probably add a fan or something to limit the heating, but frankly, since the unit has been working fine for many months now, I probably won’t even bother. It’s cheap & cheerful as it is, so let’s keep it that way!
The only thing I might do is actually start using that exposure timer the way I intended it to be used: as an actual light integrator. In cases where I want to do test strips, for instance, it makes sense to be able to work in “UV units” instead of seconds, so that the transition from a stepped test strip to a full print is more consistent.
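A light integrator along those lines is little more than an accumulation loop: switch the lamp on, keep summing power × time until a target dose in “UV units” is reached, then switch off. Here’s a minimal sketch, with the sensor read and the lamp switch abstracted as callables, since the actual hardware interface (the ADS1115 over I2C, whatever relay drives the lamps) is specific to each setup:

```python
import time

def expose_to_dose(read_uv, set_lamp, target_dose, poll_interval=1.0):
    """Run the lamp until the integrated UV dose reaches target_dose.

    read_uv() returns instantaneous UV power in arbitrary units;
    set_lamp(bool) switches the light source. The dose accumulates as
    power * poll_interval ("UV units"), so sagging output simply
    lengthens the exposure instead of under-exposing the print.
    """
    dose = 0.0
    set_lamp(True)
    try:
        while dose < target_dose:
            time.sleep(poll_interval)
            dose += read_uv() * poll_interval
    finally:
        set_lamp(False)  # make sure the lamp goes off even on error
    return dose
```

The nice property is that a test strip stepped in UV units transfers directly to the full print, regardless of how warm the LEDs happen to be on either occasion.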