Straight ahead: the linearization game, part 2

Previously I wrote about the necessity of linearization: if you print an inkjet digital negative, the densities you’ll get in the resulting carbon transfer print are a bit of a gamble. Put differently: the relationship between inkjet negative density and print density is not a linear one. To get color to work reasonably well, I’ll need to linearize my curves reasonably well, too. It’s a bit of a chore, but…well, no but, and not a ‘bit’ either. It’s just a chore. And there are actually some preparations to be done before I can start with the…err, preparations. (I’m not sure when I’ll get to the actual printing, come to think of it!)

At the risk of sounding like Cato, who ended each of his speeches to the Roman senate by remarking that Carthage really should be razed to the ground, let me complain a little more about GIMP. (Skip the following few paragraphs if you’re not interested in whining about open source software.)

A brief lament about GIMP’s bugginess

You see, GIMP is great. It really is: it’s very functional and it’s free to use. That combination alone makes it valuable. But it isn’t perfect, and sometimes (a bit too often) annoyingly so. For this whole linearization issue, I’ve been leaning heavily on GIMP’s curves functionality, and I’m afraid it’s slightly buggy. There are two issues I’ve run into, at least with the version I’m currently using (2.10.32, revision 1):

1: If you define, for instance, an S-curve in GIMP with some fractional points (working in 16-bit mode), don’t expect that applying this curve to a greyscale step tablet will actually yield the values you programmed into the curve. If, for instance, there’s a point at input 10.00 with an output of, let’s say, 7.50, and the remainder of the curve is actually curved, don’t expect a 10% grey patch to end up as 7.5% grey after applying the curve to it. For reasons that are beyond my understanding, GIMP will end up with something close to the value you’d expect, but not quite. Given that small changes in curve shape can result in big density differences in the actual carbon print, this is somewhat annoying. Sorry, I have no solution for this. I guess we’ll have to accept it as it is.
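What I can do, at least, is compute what a curve should produce at its defined points, so I know how far off GIMP actually is. Here’s a minimal sanity-check sketch in Python; I don’t know GIMP’s internal spline, so the monotone cubic here is just a stand-in for the smooth segments between points. At the control points themselves, any sane interpolation should return the programmed value exactly.

```python
# Sanity check: what *should* my curve do to a given grey value?
# (A sketch; the spline is an assumption, GIMP's internal one isn't public.)
import numpy as np
from scipy.interpolate import PchipInterpolator  # monotone cubic interpolation

# Curve control points as percentages (input -> output), e.g. a mild S-curve
# with the fractional point from the example above:
points_in  = np.array([0.0, 10.0, 50.0, 90.0, 100.0])
points_out = np.array([0.0,  7.5, 50.0, 92.5, 100.0])

curve = PchipInterpolator(points_in, points_out)

print(curve(10.0))   # 7.5; a 10% patch should come out at exactly 7.5%
```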

2: In working with consecutive versions of compensation curves, I became familiar with the rather rudimentary and, as it turns out, dysfunctional curve preset management dialog in GIMP. There’s this little window (it’s literally little, so longer curve names aren’t displayed entirely – quite annoying in itself) that you can use to remove curve presets, rename them or save them to a separate file. Starting with the latter: that button does nothing, apparently. That is to say, I’ve not been able to save a curve preset by clicking it – I think. There’s a possibility it’s saved in an undisclosed, for-your-eyes-only (not mine) location.

That’s not the worst, though. It turns out that if you rename a curve preset, and some of the presets have long names with characters like “%” in them, there’s a chance that GIMP will not only rename the preset you selected, but also a seemingly random other one – or it will simply delete a random other preset. W! T! F! This maddening problem sent me on a wild goose chase that probably cost me about a day’s work, and I had to figure out a workaround to prevent the issue. I don’t think I have a watertight solution, but for now, I just save my curve parameters in an Excel sheet so I can at least recreate them in case GIMP throws a hissy fit again.
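An Excel sheet works for me, but any plain-text format outside GIMP’s reach will do. Something like this hypothetical little helper (the file name and preset names are just placeholders) captures the same idea:

```python
# Back up curve presets as plain CSV rows: preset name, input %, output %.
# GIMP never touches this file, so it can't eat it.
import csv

def save_preset(path, name, points):
    """Append one preset's control points (a list of (input, output) pairs)."""
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        for x, y in points:
            writer.writerow([name, x, y])

save_preset("curve_presets.csv", "cyan_comp_v3",
            [(0.0, 0.0), (10.0, 7.5), (50.0, 50.0), (100.0, 100.0)])
```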

It’s stupid stuff like this that keeps the open source movement from gaining real traction with the majority of end users. In my experience (as a former all-round IT techie, no less), the risk of open source from an end-user perspective is spending way too much time on maintenance, bug fixing, troubleshooting and working around silly issues that hamper productivity. You think Adobe would get away with this kind of mess? I bet not! Alright, rant over. For now. And no, I’m still not buying into Adobe’s extortionate subscription scheme. But days like these do drive me closer to that kind of insanity.

Creating a baseline: figuring out ink density, sensitizer strength and exposure time

Story of my life, that heading. Joking aside: since I’m using GIMP, I can’t use ChartThrob to create compensation curves, so I had to DIY something. The principle remains the same, though. Start with some kind of series of greyscale patches, print a digital negative, print that as a carbon transfer, measure the density of each patch and then create a compensation curve. In principle, it’s a one-iteration process. In practice, it generally takes a bit more work, and of course it needs to be done for each particular carbon tissue formula. Since I’m currently using four (C, M, Y and K), I have to do this four times.
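For the curious, the core of the idea fits in a few lines of Python. The measured densities below are made up for illustration; the point is the inversion step: for each tone you want in the print, look up which negative value actually produced it.

```python
# Compensation curve sketch: invert the measured print response so that
# the corrected negative yields a linear print. Numbers are illustrative.
import numpy as np

patch_in = np.linspace(0, 100, 11)      # greyscale patches sent to negative, %
measured = np.array([0, 2, 5, 11, 20, 33, 48, 64, 79, 91, 100.0])
                                        # normalized print densities, 0-100

target = patch_in                       # what a linear print response would give
# For each desired print value, find the input that actually produced it:
compensation = np.interp(target, measured, patch_in)

for want, send in zip(target, compensation):
    print(f"want {want:5.1f}% in print -> send {send:5.1f}% to negative")
```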

But first, I needed to do something else. Remember, this is carbon transfer, so it’s a matter of juggling all kinds of variables. There are a few especially important ones here: the maximum density of the digital negative, the dichromate sensitizer concentration and the UV exposure intensity. Together, these determine the contrast range I can print in a carbon image, and they also influence curve shape. The question arises: what kind of negative density, dichromate concentration and exposure would be appropriate? Dichromate concentration and negative density both influence contrast, so they can also be used to compensate for each other to a large extent. This suggests the choice of an optimal parameter set isn’t necessarily very strict: there are probably several optimums, and it’s mostly a matter of picking a convenient one. So that’s what I did.

Greyscale bars negative design. The numbers indicate percentage transmission – or the inverse of percentage grey, if you will.
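A chart like this is easy to generate programmatically, by the way. Here’s a rough sketch using Pillow; the pixel dimensions and bit depth are placeholder assumptions, not necessarily what I used:

```python
# Generate a 10-step greyscale bar chart as a 16-bit PNG.
# Sizes are placeholders; adjust for your own negative dimensions.
import numpy as np
from PIL import Image

width, height, steps = 1500, 1200, 10
bar_height = height // steps
chart = np.zeros((height, width), dtype=np.uint16)

for i in range(steps):
    value = round((i + 1) / steps * 65535)      # 10%, 20%, ... 100%
    chart[i * bar_height:(i + 1) * bar_height, :] = value

Image.fromarray(chart).save("bar_chart_16bit.png")
```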

First I made the test chart above. It’s a simple bar pattern, with the bars spaced at 10% density intervals. It’s intended to be printed as a negative; I did mine on 4×5″. The gradient bar on the far right is included as a visual aid and subjective reference; in practice, I don’t really use it and just rely on the bars. The resolution is of course limited, with only 10% increments instead of a more fine-grained approach. Maybe I’ll expand it one day, but for this exercise, I wanted something with fairly long bars and big patches to reduce the influence of any spurious defects in the prints. I also chose bars (instead of squares) so I could make an incremental exposure test chart, like so:

Carbon transfer exposure step wedge using the bar chart above.

Here, I printed the bar chart shown above as an inkjet transparency on Fixxons transparency film – this happened to be the best transparency film I had on hand, so that’s what I used. I then made a carbon print with 15-second exposure increments. In the example above, exposures ran from 150 seconds (2.5 minutes) through 270 seconds (4.5 minutes). That gave me a way to observe the influence of exposure time. But evidently, I wanted to assess two other parameters as well, so I didn’t stop at the test above.

I also varied maximum negative density. I did this by varying the ink density, which the Epson driver allows you to set between -50% and +50% (the default is 0). I only tested +0% and +50% – for carbon transfer, you generally want pretty much all the density you can get, so I ignored the negative ink density settings. The drawback of higher ink densities is that the pizza wheel problems inherent to desktop Epson printers tend to be emphasized, so I wanted to give +0% density a go as well.

Thirdly, I varied dichromate sensitizer concentration. This influences contrast, but also exposure time. Lower dichromate concentrations create higher contrast, but also require longer exposure times to achieve the same print densities. Conversely, high sensitizer concentrations require long-scaled negatives with a high maximum density, as otherwise it’s impossible to print a pure white.

About pigment, and the horrible reality of how carbon printing is fatally flawed

Now, sensitizer concentration, since it influences contrast, also interacts with the pigment concentration in the tissue. After all, a higher pigment concentration creates higher contrast. This is probably the single most effective way to manipulate carbon transfer contrast – accepting that you have to pour new tissues to make a change. So manipulating image contrast involves (among other things) the density range of the negative (and thus, ink density), sensitizer concentration and tissue pigment loading.

Since with inkjet negatives I am limited in terms of maximum negative density (there is only so much ink the printer can deposit onto a transparency), there is a pretty hard cap on the maximum sensitizer concentration I can use. If I went any higher, I wouldn’t be able to print a full contrast range image anymore, because no inkjet negative would have enough density range to print it. As it turns out, with the Fixxons transparencies and the Inkjetmall pigment inks I’m using, this limitation isn’t much of a problem – good news!

That still leaves the balance between pigment concentration and sensitizer concentration to be worked out. Why not go easy on everything and just use a high pigment concentration (and thus, a high inherent contrast of the tissue) so that there’s more wiggle room in the negative and sensitizer parts? Well, we then hit another wall, and a particularly nasty one – in fact, it’s one of the handful of things that might just put an end to this entire project…

Carbon transfer is a really beautiful process, but it has one very serious flaw: it’s fundamentally impossible to print a continuous tone gradient that goes all the way from high density to paper white. It’s that last bit, close to pure white, where things will always break down. How badly they break down depends on several factors, but break down they will, at some point. The problem is in the nature of carbon transfer itself: the image is a layer of gelatin with pigment embedded in it, and density is created by the thickness of the gelatin/pigment layer. Consequently, very low densities (very close to white) are in reality very flimsy gelatin layers. And those are also very, very delicate. In fact, they are so delicate that at some point, they just don’t survive the warm water bath of the development step and simply wash away.

The result is that carbon transfer always shows a distinct step between pure paper white and the first printable density that survives development. There are several ways to mitigate this issue. One is to not work with thin gelatin layers at all, and instead use a halftone screen approach in which only high-density dots (very tiny ones, but lots of them) are printed. This is the approach Calvin Grier uses. It works really well, but places high demands on the quality of the halftone negative. With just a simple inkjet printer at my disposal, this is not a very realistic option (although I might give it a go at some point). Another solution is to print several layers instead of just one; for instance, one high-density layer that covers the dark tones, and a low-density layer that cannot by itself produce a true black, but that covers the lighter tones.

The latter solution is actually the key, because such a low-density layer is in fact a low-pigment (and hence, low-contrast) tissue. The principle is simple: to produce the same density with a low pigment load, you need a thicker gelatin layer (a quick back-of-the-envelope sketch follows below). A thicker layer adheres more easily, so the point where the image breaks down near white shifts a bit further towards white, allowing us to print a lighter tone than with a highly pigmented tissue. Since I don’t (not yet, at least) want to print each of the four colors in one image with several layers (others do it this way; again, Grier is one of them), I aim to keep pigment loads on the low side. While this limits the maximum density I can print with each tissue, for color I figure that very high densities for each individual tissue aren’t really necessary, as long as they can produce a high density together.
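The back-of-the-envelope version, assuming density scales roughly with pigment fraction times layer thickness (a Beer–Lambert-style simplification, not a calibrated model):

```python
# Toy model: print density ~ k * pigment_fraction * layer_thickness.
# Halving the pigment load means doubling the relief thickness for the
# same density; and a thicker layer survives development more easily.
def required_thickness(target_density, pigment_fraction, k=1.0):
    return target_density / (k * pigment_fraction)

print(required_thickness(0.3, 0.01))  # 1% pigment: thickness 30 (arbitrary units)
print(required_thickness(0.3, 0.04))  # 4% pigment: thickness 7.5, much flimsier
```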

For the reasons discussed above, I try to keep pigment load fairly low, and this puts a limit on how high a sensitizer concentration I can get away with. After all, I still need to print a decent contrast range, and a very low-contrast tissue (low pigment) combined with a low-contrast sensitizer (high concentration) eventually becomes problematic.

Establishing the basic parameters: steps of steps

So I’ve got these three parameters I want to experiment with: ink density in the negative, sensitizer strength and exposure time. I’m going to keep pigment concentration constant and on the low side – and I will have to figure that one out for each color anyway. I’ll come back to that later…one day. Promise! But there’s a story to that one, as well.

The problem with assaying three parameters at the same time is that of the Cartesian product. There are just a lot of combinations to work with – if I were to try, say, 8 exposure times, 4 sensitizer concentrations and 5 ink densities, I would somehow have to conduct 8 × 4 × 5 = 160 tests (see the sketch below). This is a problem, because I am fundamentally lazy. So no 160 tests for me.
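Spelled out in code, just to make the combinatorics tangible (the parameter values here are invented for illustration, not the ones I tested):

```python
# Full factorial design: every combination of every parameter value.
from itertools import product

exposure_s    = [150, 165, 180, 195, 210, 225, 240, 255]  # 8 exposure times
sensitizer_pc = [4, 8, 12, 16]                            # 4 concentrations
ink_density   = [-50, -25, 0, 25, 50]                     # 5 driver settings

trials = list(product(exposure_s, sensitizer_pc, ink_density))
print(len(trials))  # 8 * 4 * 5 = 160 individual carbon prints; no thanks
```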

The quick & dirty solution is to reduce the variability of each parameter. For that reason, I only tested +0% and +50% ink density on the negative, and only an 8% and a 16% dichromate concentration for the sensitizer. I did allow for some more granularity in exposure time, using 9 steps, bringing the total number of tests down to 2 × 2 × 9 = 36 trials. And by making step wedges, I could cram all 36 individual trials onto 4 prints. After all, there were only 2 × 2 = 4 combinations of sensitizer and inkjet negative density to cover.

Inkjet ink density, sensitizer strength and exposure time tests

The result of these tests is shown above: 8% dichromate concentration on the left, 16% on the right. At the top, inkjet ink density +0%; at the bottom, +50%. Exposure times in seconds are indicated on the right-hand side of each step chart.

The dichromate concentration requires a little explanation, because those of you who are familiar with carbon transfer printing are probably surprised at the fairly high concentrations. While 8% isn’t abnormally high, 16% isn’t exactly common. But this is an artefact of the way I sensitize my tissues. For this, I use a small paint roller: a 6 cm wide roller for my 4×5″ (more like 5×7″ actual surface area) tissues and a 10 cm wide roller for 8×10″ (8.5×11″ or so) tissues. I load these rollers with a minimum of sensitizing solution; for a 4×5″ tissue, I typically use 1 ml of dichromate solution and a few ml of ethanol. This works well in the sense that it gives good evenness, and the low water content makes for shorter tissue drying times.

So if someone else were to use 4 ml of dichromate solution, they would use a 2% or 4% solution instead of my 8% and 16% solutions, respectively: the amount of dichromate that ends up in the tissue is the same. Note that I use ammonium dichromate, which is more soluble in water and especially in ethanol than potassium dichromate; the latter would drop out of solution with my working method.
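In numbers, the equivalence is just volume times concentration (assuming the usual weight/volume percentages):

```python
# Grams of ammonium dichromate delivered to a tissue, assuming w/v percent.
def dichromate_grams(volume_ml, concentration_pct):
    return volume_ml * concentration_pct / 100

print(dichromate_grams(1, 16))  # my method:     1 ml of 16% -> 0.16 g
print(dichromate_grams(4, 4))   # common method: 4 ml of 4%  -> 0.16 g
```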

Choosing a working point: eliminating the options

Using the stepped charts above, I could pick a suitable combination of inkjet ink density, sensitizer strength and exposure time. But which one to pick? As said before, I think there are many ways to skin a cat. Frankly, I expected that some of the longer exposure times would have resulted in problems with the whites not remaining white, but on the Fixxons transparencies, the inkjet ink turns out to be pretty darn opaque for all intents and purposes. I discarded any combinations that didn’t produce a pure white – there were only a very few of these, at the longest exposure times of the 16% sensitizer tests.

I figured that the optimum would be determined by (1) making optimal use of the contrast range the negative has to offer, and (2) having a curve shape that requires the least tailoring to create a linear output.

To tackle criterion #1, I looked at the total range of steps printed on each line, and in particular the differentiation in the high densities (shadows). To put it simply: I essentially counted the visually distinct steps on each line, and the highest number of steps wins. You can see in the test charts that several combinations on each print still met that criterion, so this narrowed the choice down, but didn’t result in a firm conclusion.

I then looked at the white end and figured (criterion #2) that either a very large or a very small density difference between white and the first visible tone would not be optimal. The reasoning is that if this first density step is very large or very small, it takes a fairly extreme curve adjustment to linearize that part of the curve. Since I was already aware of the highlight problem of carbon prints (see above), and earlier experiences with linearizing digital negative curves taught me that sometimes pretty extreme compensations are required at the extremes of the tonal scale, I looked at this aspect specifically. On this basis, combinations such as 8% sensitizer & +50% ink density (bottom left print) at 330 seconds would probably not work optimally, because the first visible density step sits quite close to paper white.

Ultimately, I selected a 16% sensitizer concentration, +50% ink density (so the bottom right print) and a 195 second exposure time as my starting point. It could just as well have been a couple of other options, but I went for this combination mostly because it afforded relatively short exposure times (convenient, me being of the not overly patient persuasion). Moreover, from my experience with carbon printing so far, I have the rather subjective and intuitive impression that higher sensitizer concentrations tend to produce prints with a smoother and more even tonality, so I expect the least extreme compensation curves on this side of the spectrum. I admit it’s a bit of a shaky argument, but more objective reasoning didn’t help in the final round of elimination.

Concluding remarks and unfinished business

Alright, to cut a long story short, I’ve got my basic parameters: I’ll print my inkjet negatives with +50% ink density on Fixxons transparencies, sensitize my 4×5″ tissues with 1 ml of a 16% ammonium dichromate sensitizer and expose them for 195 seconds.

Well, that final parameter didn’t really work out exactly like that – or did it? It so happens that in the meantime…yes, smack in the middle of the linearization process!…I decided to put together the LED UV light source I wrote about earlier this week. While establishing the parameters discussed in this blog, I used a single LED panel at short distance (ca. 11 cm), so that’s what the 195 second exposure time was based on. I then started the linearization process (which follows, I promise!) with that setup, but changed to the compound 4-panel light source midway. And yes, that did seem to do something to the contrast curve as well, because I also increased the distance between the light source and the print. I did this in turn to prepare for larger prints, so that I wouldn’t have to redo part of the linearization process when scaling up. Yeah, I know, it’s a mess.
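For a first guess at the new exposure time, I can at least do the naive math. The sketch below assumes inverse-square falloff and intensity proportional to the number of panels, both of which are rough assumptions for a flat 4-panel array at close range (and the new distance here is hypothetical), so it only yields a starting point for a fresh test strip:

```python
# Naive exposure rescaling after changing the UV light setup.
# Assumes intensity ~ panels / distance^2; a crude model for an LED array.
def rescale_exposure(t_old_s, d_old_cm, d_new_cm, panels_old=1, panels_new=1):
    intensity_ratio = (panels_new / panels_old) * (d_old_cm / d_new_cm) ** 2
    return t_old_s / intensity_ratio

# From 1 panel at 11 cm to 4 panels at a hypothetical 25 cm:
print(round(rescale_exposure(195, 11, 25, panels_old=1, panels_new=4)))  # ~252 s
```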

And there are many loose ends. Pigment concentration, for instance. For the tests in this blog, I used a tissue loaded with 1% Talens India ink – my standard B&W carbon tissue. For color work, it’s evidently not going to be India ink (mostly), but the kind of pigments/paints I wrote about before. This raises the question whether determining the three parameters central to this blog would have worked out if I had done it separately for the cyan/blue, magenta and yellow paints I’m using. I bet it would, and I bet I would have ended up choosing a slightly different combination every time. But I purposefully ignored this complexity, because I felt it would have confused matters too much. Using different sensitizer strengths and exposure times depending on the color being printed – it’s certainly possible, but the risk of small mistakes multiplies, and that would make the process impractical and very, very frustrating. So I admit to cutting corners – not just a little, but a lot!

And finally, as said before, this is really unfinished business. This blog, like the previous one, has linearization in its title, and I haven’t even touched that yet. So there’s more to come. In the next part, I will really get to the point. I think.
