This is the first in a series of papers that tries to explain, in a coherent way, how color management works from the ground up from the perspective of a user. Given that our beloved gamma function spreads all over the place and has a huge influence on all aspects of computer graphics, I decided it was time to develop the subject properly, as the documentation I have found on the internet is so bad and confusing.

Inspired by the book Digital Compositing for Film and Video by Steve Wright and his superb explanation of the subject, I have tried to synthesise it as the first building block for the rest of the papers on color management.

First things first: let me introduce the current situation, in which all TV sets and monitors in the world display the same behaviour. This is a happy accident due to the physics of the cathode ray tube, also known as CRT technology.


Let’s assume your TV set receives an image from your favourite broadcaster. The way a TV “paints” the picture on the screen is by exciting a layer of phosphors, and the brightness of those phosphors is controlled by manipulating the electric current.

As you can imagine, converting volts to luminosity is the name of the game, and without entering into the definition of voltage or the like, the key element is that the response is not a linear function: increasing the voltage by 50% does not give you 50% more luminosity. Instead, the response follows a power function.
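This response can be sketched with a few lines of Python. The exponent of roughly 2.2 is the common approximation for CRTs (real tubes vary per device), and the function is a simplification of the true electronics:

```python
# Sketch of a CRT's voltage-to-luminosity response, assuming a pure
# power function with exponent ~2.2 (a rough approximation; real
# tubes differ per device).

def crt_luminance(voltage: float, gamma: float = 2.2) -> float:
    """Normalised voltage in [0, 1] -> normalised luminance in [0, 1]."""
    return voltage ** gamma

# Driving the tube at 50% voltage yields far less than 50% luminance:
print(round(crt_luminance(0.5), 3))  # ~0.218, not 0.5
```

So a mid-level signal comes out at barely a fifth of full brightness, which is exactly the darkening this article is about.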


So remember that your TV set displays images much darker than they really are, which means that somewhere in the production of those images someone has to compensate for this loss of luminosity, for example by brightening the images when they are captured by the cameras of the TV studio.

In the world of the internet and photography this same process is applied directly by your shiny camera: after the sensor captures the image, the camera embeds an sRGB curve that compensates for the future loss of luminosity.


Which is why all your photos from your camera look good on your TV without you doing anything, and it is the whole reason this trick was invented: by applying the inverse of the monitor’s response curve, you get back the original image you expect to see.
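The round trip can be demonstrated in a few lines, using the rough 1/2.2 and 2.2 exponents from this article (the real sRGB curve has a small linear toe, which I ignore here):

```python
# Round trip: encode with the inverse curve (as a camera embedding an
# sRGB-like 1/2.2 curve would), then let the monitor darken it.
# The exponents are the rough approximations used in this article.

def encode(value: float, gamma: float = 2.2) -> float:
    return value ** (1.0 / gamma)   # camera: brighten to pre-compensate

def display(value: float, gamma: float = 2.2) -> float:
    return value ** gamma           # CRT: darken on display

original = 0.5
shown = display(encode(original))
print(round(shown, 6))  # back to ~0.5 — the two curves cancel out
```

The two curves are exact inverses, so the viewer ends up seeing the scene the camera saw.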


If we, for example, look at this photo I took of one of my son’s toys and remove the embedded sRGB profile, this is how it looks on my screen: way too dark.


Remember, the image would look perfect if the monitor were not darkening it; the data really is correct. We just need to compensate for the monitor’s behaviour by applying the sRGB profile so it looks the way I expect, even at the expense of having the wrong data, as in this image.


So, in computer graphics, when you are generating an image the process that image follows is this: you generate the image, the image is stored in the memory of your graphics card (the framebuffer), and then it is sent to your monitor, which darkens it. Therefore we have to add a conversion step manually to brighten our image before it gets to the screen, by applying a gamma either in the framebuffer or in the actual generation stage.

As you can imagine, both solutions would work, but you really want to apply the gamma in the framebuffer so you don’t destroy your image, given that you want your image to stay linear all the way through.

Why linear?

Well, there are many reasons, but the most important is that light behaves linearly, and given that we manipulate light we want the maths to be correct, which brings tons of benefits. If you don’t, the list of problems is just too long for this article.
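One small, hypothetical example of why this matters: averaging a black and a white pixel, which is exactly what a blur filter does. Done in linear light and then encoded, you get one answer; done directly on gamma-encoded values, you get a visibly darker one. The 2.2 exponent is this article’s rough approximation:

```python
# Why linear matters: averaging a black and a white pixel (what a blur
# does) gives different answers in linear vs gamma-encoded space.

gamma = 2.2
black_linear, white_linear = 0.0, 1.0

# Correct: average the physical light first, then encode for display.
avg_linear = (black_linear + white_linear) / 2          # 0.5
encoded_correct = avg_linear ** (1 / gamma)             # ~0.73

# Wrong: average the already gamma-encoded values directly.
encoded_wrong = (black_linear ** (1 / gamma)
                 + white_linear ** (1 / gamma)) / 2     # 0.5

print(round(encoded_correct, 2), encoded_wrong)  # 0.73 vs 0.5
```

The “wrong” result is what you get when a filter runs on images that still carry the gamma baked in: the blur looks too dark at the edges.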


So far so good, so where does gamma enter the picture? It already has: the response curve of a CRT monitor can roughly be described as x to the power of 2.2, and the sRGB curve can roughly be described as x to the power of 1/2.2, which is the inverse of the CRT curve.

Now when you open your software, Nuke for example, the gamma node shows a behaviour that looks odd at first: when you apply a “gamma correction” of 2.2 the image becomes brighter. That is because the engineers who developed Nuke’s gamma node are really exposing the denominator of the power function’s exponent.
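A tiny sketch of what such a “gamma correction” knob effectively does (this mimics the observed behaviour described above, not Nuke’s actual implementation):

```python
# The gamma knob sits in the denominator of the exponent, so values
# greater than 1 brighten the image rather than darken it.

def gamma_node(value: float, gamma: float) -> float:
    return value ** (1.0 / gamma)

mid_grey = 0.218
print(round(gamma_node(mid_grey, 2.2), 3))  # ~0.5 — brighter, as observed
```

Setting the knob to 2.2 lifts a CRT-darkened mid grey right back up, which is why the node brightens instead of darkens.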


Therefore, in the context of computer graphics software, you really are talking about gamma correction, meaning you affect the denominator of the exponent of the power function.


Which in fact explains both the behaviour and the actual maths from the point of view of your software.

So what does all this mean? Well, given that we want to work in linear because of the intrinsic nature of light, we have to make sure all the images we are going to use are set up in linear mode, so that the mathematical models in your render engine and compositing software work as they were designed to. And given that your output images should also be linear, you have to consistently apply gamma correction on your display to compensate for your monitor which, as I showed, would otherwise darken your images.
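The whole workflow described above can be sketched in a few lines: linearise the gamma-encoded inputs, do the maths in linear, and apply the display gamma only at the very end. The pixel values are hypothetical, and the 2.2 exponent is the rough value used throughout this article:

```python
# Minimal linear-workflow sketch: linearise inputs, composite in
# linear light, encode for the display only at the end.

GAMMA = 2.2

def to_linear(encoded: float) -> float:
    return encoded ** GAMMA

def to_display(linear: float) -> float:
    return linear ** (1.0 / GAMMA)

# Two gamma-encoded plates to be averaged (hypothetical pixel values):
plate_a, plate_b = 0.5, 0.8

# Composite (here, a simple mix) in linear light...
result_linear = (to_linear(plate_a) + to_linear(plate_b)) / 2

# ...and only encode for the monitor at the very end.
print(round(to_display(result_linear), 3))  # ~0.67
```

Every operation in the middle sees linear light, so the maths stays physically correct; the display transform is a view-only step, never baked into the data.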

The temptation is to actually “burn” the gamma correction into the images so they look right without any further action, but this defeats the whole point of the operation: you will end up in La La Land, where 2+2=10, and your images will look wrong with no possibility of straightening them out.

Sure enough, many of us with years of experience working in non-linear workflows could argue that this is not true, but I can guarantee you that your opinion is biased and this is a fact nobody can escape: light behaves in a linear fashion; the CRT monitor in front of you does not.
