Written by
Bram Stout
Copyright © 2022 Bram Stout Productions – This work is licensed under CC BY 4.0. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/
Colour management can be a daunting thing. Some use a system for it, some do it manually, and some pretend it doesn’t even exist. No matter how daunting, complicated, or varied the methodologies may be, having a proper colour management workflow is still very much important. If done properly, it becomes almost transparent: its technical guts are locked up and hidden away, allowing you to focus on the creative parts. But not everyone is able to make a good workflow, and when working together on a project, everyone’s workflows must be compatible with one another. And so ACES was born, and alongside it, the ACES Reference Rendering Transform (RRT). But just like all things, it’s not perfect.
Most artists that have used the ACES RRT will probably be aware of its problems, and I haven’t been blind to them either. Since I started using ACES, I have been looking into the RRT, trying to make improvements to it. I must advise you, though, that these improvements are based on my subjective preferences. Colour renderings are creative manipulations of colours and are inherently subjective. Different projects require different renderings, so there is no one rendering that fits all. These improvements are what I seek for my projects, but they may not be right for yours. You may even be revolted by them. The problems that I identify in this web book could be things that you don’t even see as problems. However, my hope is that with this web book, you will be able to identify and make your own improvements and achieve a rendering that works best for your projects.
The software applications that I use all make use of OpenColorIO and the ACES implementation that I use is in the form of an OpenColorIO configuration. Because of that, the improvements that I’ve made will also be implemented in the OpenColorIO configuration. Luckily, the same improvements can also easily be used in other tools than OpenColorIO. And, some or all of the improvements can also be applicable to other colour renderings.
For the purposes of evaluating the RRT, I’ve created an image containing multiple gradients with increasing exposure. The saturation of the gradients is 100% throughout.
You’ll probably already be able to see some of the issues, but the image above makes use of the sRGB Output Display Transform and therefore is not a true representation of the RRT. The RRT uses the wider gamut AP1 for the rendering. So to properly evaluate the RRT, we should make use of the AP1 gamut.
This is a more representative image of the RRT, and when comparing it with the previous image, we can identify our first issue. The conversion to a smaller display gamut gives unsatisfying results for highly saturated colours.
In the image, as the colours get brighter and brighter, at some point they start to become white. This is the “path to white” and it is something that I like. However, we can see that the point at which a colour starts to go to white is not consistent. Additionally, even though the bottom of the gradients is over 10 stops above middle grey (a linear value of over 200.0, or 20,000 nits), none of the hues have reached full white, although some do get very close. Blue isn’t even halfway to white. So, the path to white is inconsistent across hues and does not happen fast enough.
Even though the AP1 gamut that the RRT uses is a wide gamut, there are wider gamuts and visible colours that are outside of the AP1 gamut, so how does the RRT handle those? Here is the same image, but with the colours oversaturated so that they lie outside of the AP1 gamut.
Personally, I like the look of photographic film, and the ACES RRT is modelled, in part, on it. But I want to go a bit further. However, there is no single common look of photographic film. Each film stock has its own rendering, which looks different from the others. So, saying that I want it to look “film-y” is quite meaningless. Instead, let’s look at a few stills from movies where I like the rendering of the film stock that they used.
Take a look at the saturation of her jacket, fingernails, and scarf. They are all very saturated. There aren’t that many muted colours. It’s either grey or vibrant.
Again, we can see that his red shirt is very saturated, but so is his skin tone. The added saturation, in my opinion, makes skin look much more alive. But there is another thing besides the increased saturation: the shadows of saturated colours. Even though his shirt has quite a bit of shadow on it, the shadows aren’t that dark. It’s as if the saturated colours get a boost in intensity as well.
Again, note the high amount of saturation which creates a nice separation, but also the green foliage in the bottom centre of the image. Even though we are looking at the underside, the shadows aren’t that dark.
Here you can see that the blood is almost a solid colour, with barely any shadows. Even though the blood is illuminated by a very blue light, it still renders as a nice red, giving a lot of separation (whether that is due to the rendering or a colour grade, I don’t know, but the increased saturation will definitely help). So, the colours should be more saturated and the saturated colours should get a boost in intensity.
A common complaint about the ACES RRT is also the amount of contrast. By default, the RRT has 6.5 stops below middle grey and 6.5 stops above middle grey, giving us a total dynamic range of 13 stops, which is roughly equivalent to film prints. However, for many people including me that is a bit too much contrast. My preferred contrast gives me 7.5 stops below middle grey and 7.5 stops above middle grey, for a total of 15 stops of dynamic range. So, the contrast should be reduced.
In my research into colour rendering and the ACES RRT, I’ve come across something that some call “the notorious six.” It’s in nearly every image, including all of the images shown in this web book so far. It’s claimed to be a bad thing that should be avoided, yet it’s everywhere.
But the thing is, I like it. Many digital colour renderings have this and photographic film has this hue skew as well.
Even though the light is blue, the brightest parts become cyan, just as described by the notorious six.
There is a quote from Geoff Boyle that I think fits here very well:
“I don’t get paid to make accurate pictures, I get paid to make pretty pictures.” – Geoff Boyle
For some projects, pretty means accurate, but for many projects pretty means a heightened, metaphorical version of reality that emphasises the story. The hue skews described by the notorious six happen as a colour gets brighter and brighter. Skewing the hue towards the secondaries (yellow, cyan, and magenta) makes the colours appear even brighter, since the perceived brightness of the secondaries is higher than the perceived brightness of the primaries (red, green, and blue). Even though these hue skews are merely a byproduct of the kind of rendering used, they do serve a purpose. They heighten the perceived brightness of already bright colours.
Colour rendering and what is pretty are subjective things, and they are different for each project. For some projects, these hue skews aren’t appropriate, but for other projects they are. The projects that I do are an abstract and heightened version of reality, and I do seek a rendering that reflects that. Therefore I embrace the notorious six and refute the idea that they are a bad thing and that there is never any room for them.
When colours lie outside of the rendering gamut used by the RRT, we lose gradations in the hue. So, I want to make sure that all of the colours are inside the AP1 gamut. There are many ways to go about this, but the easiest that I can think of is to convert the colour to HSV and then clip the saturation to a maximum value of 1. On paper it shouldn’t be the best method, since the clip in saturation would make us lose gradations in saturation. In practice, however, it produces results that are perfectly fine.
The above image is how the RRT handles colours outside of the AP1 gamut.
While this image is with a simple clipping of the saturation. The hues no longer collapse into the primaries. A roll-off of the saturation could provide better results, but most real world colours already fall within the AP1 gamut. For the colours that lie outside of the AP1 gamut, combined with the rest of the rendering and the conversion to a smaller display gamut, any gradations in saturation would most likely become negligible.
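As an illustration of the idea (not the actual implementation used later in the OpenColorIO configuration), the saturation clip can be sketched in a few lines of Python using the standard colorsys module:

```python
import colorsys

def clip_saturation(r, g, b, max_sat=1.0):
    # For out-of-gamut colours, one or more channels go negative,
    # which shows up in HSV as a saturation above 1.0.
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    s = min(s, max_sat)
    return colorsys.hsv_to_rgb(h, s, v)
```

An out-of-gamut colour such as (-0.1, 0.0, 1.0) comes back with all channels non-negative, while colours that are already inside the gamut pass through unchanged.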
Lowering the contrast is conceptually very simple. Practically every tool, including OpenColorIO, provides a contrast operator that we can use. However, we still need to make sure that the contrast is applied properly, which would mean that we’d, at the very least, have to have a general understanding of how contrast works. The ACES RRT is inspired by photographic film, and so are all of the contrast operators that can be found in many of the tools used in colour grading.
With photographic film, we can plot the amount of light hitting the film against the recorded intensity of the light. The result is called a characteristic curve. A very important thing to note is that both axes are logarithmic: the exposure (the amount of light hitting the film) and the density.
The bottom of the curve (the shadows of the image) is a flat line, which bends into a straight line in the middle of the curve. At the top of the curve, that straight line flattens out and we hit the peak of the amount of light that the photographic film can capture.
The slope of the linear segment in the middle determines the contrast. The gentler the slope, the lower the contrast. Since both axes are logarithmic, the slope of the linear segment is called the gamma (not to be confused with gamma in digital colour spaces).
So, if I want to change the contrast, I need to change that slope, but I still want the centre of the line to stay where it is, so I need to change the slope around a pivot. The general formula to change a slope around some pivot is the following:
y = (x – pivot) * contrast + pivot
That is something that I can easily implement, and this is also the formula used by many colour grading applications. However, there is one important thing: this formula should be applied in logarithmic space, just like how the characteristic curve is logarithmic. Luckily, thanks to maths, the formula can also be rewritten to work in linear space:
y = (x / pivot)^contrast * pivot
In theory, these two formulas should produce the same results, but in practice I’ve found that, when using the logarithmic variant, the black levels go up when reducing the contrast. Therefore, I’ll be going with the linear variant.
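As a sketch, the linear variant is a one-liner:

```python
def linear_contrast(x, contrast=0.85, pivot=0.18):
    # Change the slope around the pivot in linear space:
    # y = (x / pivot)^contrast * pivot
    return (x / pivot) ** contrast * pivot
```

The pivot maps to itself, values above it are pulled down (for a contrast below 1.0), and values below it are lifted up, which is exactly a reduction in contrast.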
Here is the ACES RRT again.
And here is the ACES RRT with a reduced contrast. I went for a contrast value of 0.85 and a linear pivot of 0.18 (middle grey). It doesn’t show much improvements in these sorts of images, but later on in this web book, I’ll provide some more images that should better highlight the effect of the lower contrast.
A big issue is the path-to-white of the saturated colours. When their exposure increases, they don’t all turn to white at the same time, nor do they turn to white quickly enough. The blue primary barely starts on its path to white. In order to figure out how to create a satisfying path-to-white, it is important to find out how it works.
A pure blue colour would have the RGB values of (0,0,1). If I increase the exposure of the colour making it four times brighter, I’d get the RGB values of (0,0,4). The red and green values remain zero. Even if I increase the exposure by a factor of one thousand, the RGB values would be (0,0,1000) and the red and green values still remain zero. If the saturation of the blue colour becomes 99% instead of 100%, then the RGB values will be (0.01,0.01,1). Increasing the exposure by a factor of four yields (0.04, 0.04, 1), so the red and green values are increasing now. In order to get all of the values above one, I’d have to increase the exposure by one hundred times. If the saturation is 90% instead, the RGB values will be (0.1,0.1,1) and the exposure would only have to be increased by a factor of ten, in order to get all of the values above one. From this, I can conclude that a colour of 100% saturation will never reach white, no matter how bright it gets, and the lower the saturation the quicker a colour reaches white.
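The reasoning above can be captured in a small helper. The function name and the assumption that white sits at a channel value of 1.0 are just for illustration:

```python
def exposure_to_reach_white(saturation, white_level=1.0):
    # The smallest RGB channel of a colour with HSV value 1.0
    # is (1 - saturation). White is reached once that channel,
    # scaled by the exposure, hits white_level.
    smallest = 1.0 - saturation
    if smallest <= 0.0:
        # 100% saturation: the smallest channel is 0, so no
        # amount of exposure will ever reach white.
        return float("inf")
    return white_level / smallest
```

This reproduces the numbers above: 99% saturation needs roughly a 100× exposure increase, 90% roughly 10×, and 100% never gets there.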
The creators of ACES have realised this as well. They have added a desaturation operator, which is why the pure colours in the test image still go towards white. However, the desaturation isn’t applied evenly and isn’t strong enough, causing the colours to not reach pure white at realistic exposure values and the blues to barely even start on their path-to-white. It’s better than nothing, but it can definitely be better.
The ACES RRT uses a matrix operation to apply the desaturation, allowing them to control it per primary, but it applies the same amount of desaturation regardless of luminance. The tone mapping in the RRT puts full white at a value of roughly 16.0, which means that in order to get a colour to reach full white, it needs to get so bright that all values are above 16.0. If pure colours should reach full white at an intensity of 100.0 (roughly 9 stops above middle grey or 10,000 nits), then those colours should have a maximum saturation of 84%.
In the image above, all of the colours have been set to a saturation of 84%, and it can clearly be seen that they now all reach white, but the colours are incredibly dull. Even in the smaller sRGB or Rec.709 gamut, these colours won’t come close to 100% saturation. I don’t want this. The colours should remain vibrant, yet reach full white quickly. At low luminance values, the saturation should remain high, and at high luminance values, the saturation should go down.
In the image above, I simply started reducing the saturation based on the value of the colour. Starting at a value of 1.0, the saturation goes down until, at a value of 64.0, it has been reduced by a factor of 0.75. This is close, but I want the path-to-white to start earlier.
I want a more saturated look and for saturated colours to get a boost in intensity. Traditionally saturation is increased by just multiplying the saturation value by some number, but this can cause the saturation to go beyond 100%, which I want to prevent. So, a simple multiply won’t do. Instead, I want the saturation to move towards 100% without going above it. The following formula does what I want:
y = 1.0 – (1.0 – x)^saturation_boost
I’ll be going for a saturation boost of 1.4.
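A sketch of this saturation boost in Python:

```python
def boost_saturation(s, boost=1.4):
    # Move the saturation towards 1.0 without overshooting:
    # y = 1 - (1 - s)^boost
    return 1.0 - (1.0 - s) ** boost
```

0% and 100% saturation map to themselves, and everything in between moves up; with a boost of 1.4, a saturation of 0.5 ends up at roughly 0.62.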
The image above is the RRT with the current improvements but without the boost in saturation.
This is with the saturation boost. The reduction in contrast and the desaturation based on luminance both reduce overall saturation in the image, but the saturation boost compensates for this and even increases saturation further. So, compared to the default ACES RRT, the image has a lower contrast, but higher saturation.
Next to a general boost in saturation, I also want to increase the intensity of saturated colours. However, I don’t want to just boost all saturated colours; I want to give dark saturated colours an extra boost. This means that the formula is split into two parts: the exposure boost and the saturation attenuation. The formulas look like the following:
exposure_boost = (10^(-x) + 10^(-10 * x)) * 0.18 + 0.25
saturation_attenuation(saturation) = saturation^2
combined = exposure_boost * saturation_attenuation(saturation) + 1.0
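Put together, the boost can be sketched as follows, where value corresponds to the V of HSV:

```python
def saturated_exposure_boost(value, saturation):
    # Large for dark colours, falling off quickly as the
    # colour gets brighter...
    exposure_boost = (10.0 ** -value + 10.0 ** (-10.0 * value)) * 0.18 + 0.25
    # ...and only applied in proportion to the saturation squared.
    saturation_attenuation = saturation ** 2.0
    return exposure_boost * saturation_attenuation + 1.0
```

A fully saturated colour at a value of 0.0 gets the maximum gain of 1.61×, while an unsaturated colour gets exactly 1.0× (no change).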
So far, the images shown here have been using the AP1 gamut, in which the RRT does its rendering. However, the vast majority of computer monitors are sRGB, which is a smaller gamut, and thus the colours shown in the images are technically incorrect. We should be viewing them in the sRGB colour space.
Quite a few of the colours that are within the AP1 gamut, lie outside of the sRGB gamut, which results in colours with a saturation of above 100%. In the image, it can be identified by sections of the gradients where there are no gradations anymore. A simple solution would be to convert the image into HSV and then remap the saturation to stay within a range between 0% and 100%.
It’s a very slight improvement, but there are still two things wrong with it. The primaries have slightly changed their colours, and, especially in the purples, gradations are still lost. When inspecting the RGB values, there are places where they go above 1.0. When converting from one gamut to another, the luminance of a colour is preserved, but since a more saturated colour has a lower luminance, its intensity must be increased to compensate for the loss in luminance. This causes values to go below 0.0 and above 1.0. The saturation mapping removes the values below 0.0, but values above 1.0 remain: reducing the saturation of a colour increases its luminance, so the saturation mapping increases the luminance of out-of-gamut colours. By correcting for these luminance changes, all values go back into the range 0.0 to 1.0.
Now all of the gradations are preserved. In the magentas it is still a bit difficult to make out the gradations, but they are actually there.
There is still one thing, the hues of the primaries. The red is a tiny bit magenta, the green a tiny bit cyan.
In this CIE 1931 xy chromaticity diagram, the AP1 and sRGB gamuts have been outlined, together with a few lines showing what the AP1 primaries get converted to when saturation mapping is used. This clearly shows why those primaries have their hues changed. For the green primary, this is expected and wanted, but the red primary is supposed to be an actual red: there shouldn’t be any blue light added when converting into a smaller display gamut. A similar thing holds for the blue primary. Luckily, during the saturation mapping, we can identify these scenarios and change the hue to compensate.
Below are a bunch of test images with both the default ACES RRT and the ACES RRT with my improvements. These images have been gathered from various places, and so I’ll put in a link to the place where I’ve found them. I feel comfortable putting these images here, since this web book is purely for sharing knowledge. If anyone has any objections to the use of these images, please let me know and I’ll remove them from here.
As I’ve written in the introduction for this web book, I am using OpenColorIO and so the implementation of my improvements will be done in an OpenColorIO configuration. What’s in this chapter will still mostly apply to other tools as well. The configuration will be open source.
One important thing about OpenColorIO, is that I cannot just copy and paste my formulas into the configuration. I will have to convert most of them into lookup tables (LUTs). The downside of LUTs is that there is a loss of precision. The upside is the portability, since practically every program can read and apply LUTs.
The LUTs are created using Python scripts and I am using the Cube format. For 1D-LUTs I use the following template:
import math

LUT_SIZE = 4096
BOUNDS_MIN = 0.0
BOUNDS_MAX = 1.0
BOUNDS_SIZE = BOUNDS_MAX - BOUNDS_MIN

def lutFunc(val):
    return val

with open("luts/lut.cube", "w") as fLut:
    fLut.write("TITLE \"lut\"\n")
    fLut.write("LUT_1D_SIZE " + str(LUT_SIZE) + "\n")
    fLut.write("DOMAIN_MIN " + str(BOUNDS_MIN) + " " + str(BOUNDS_MIN) + " " + str(BOUNDS_MIN) + "\n")
    fLut.write("DOMAIN_MAX " + str(BOUNDS_MAX) + " " + str(BOUNDS_MAX) + " " + str(BOUNDS_MAX) + "\n")
    for i in range(LUT_SIZE):
        val = float(i) / float(LUT_SIZE - 1) * BOUNDS_SIZE + BOUNDS_MIN
        value = lutFunc(val)
        fLut.write(str(value) + " " + str(value) + " " + str(value) + "\n")
And for 3D-LUTs I use the following template:
import math

LUT_SIZE = 65
BOUNDS_MIN = 0.0
BOUNDS_MAX = 1.0
BOUNDS_SIZE = BOUNDS_MAX - BOUNDS_MIN

def lutFunc(R, G, B):
    return [R, G, B]

with open("luts/lut.cube", "w") as fLut:
    fLut.write("TITLE \"lut\"\n")
    fLut.write("LUT_3D_SIZE " + str(LUT_SIZE) + "\n")
    fLut.write("DOMAIN_MIN " + str(BOUNDS_MIN) + " " + str(BOUNDS_MIN) + " " + str(BOUNDS_MIN) + "\n")
    fLut.write("DOMAIN_MAX " + str(BOUNDS_MAX) + " " + str(BOUNDS_MAX) + " " + str(BOUNDS_MAX) + "\n")
    for iB in range(LUT_SIZE):
        for iG in range(LUT_SIZE):
            for iR in range(LUT_SIZE):
                R = float(iR) / float(LUT_SIZE - 1) * BOUNDS_SIZE + BOUNDS_MIN
                G = float(iG) / float(LUT_SIZE - 1) * BOUNDS_SIZE + BOUNDS_MIN
                B = float(iB) / float(LUT_SIZE - 1) * BOUNDS_SIZE + BOUNDS_MIN
                value = lutFunc(R, G, B)
                fLut.write(str(value[0]) + " " + str(value[1]) + " " + str(value[2]) + "\n")
The sample points in 3D-LUTs are spaced evenly, but there are cases where we want to have more sample points with the smaller values than with the higher values. In these cases, we make use of a shaper. A shaper is a transform (sometimes implemented as a 1D-LUT) which essentially morphs the input values into a distribution better suited for the 3D-LUT.
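As a toy example of what a shaper does (with made-up parameters, not the ones from the actual configuration), a logarithmic shaper and its inverse could look like this:

```python
import math

def shaper_forward(x, base=10.0, offset=0.001):
    # Compress linear values logarithmically so the 3D-LUT
    # spends more of its samples on the shadows.
    return math.log(x + offset, base)

def shaper_inverse(y, base=10.0, offset=0.001):
    # Undo the shaper after the LUT has been applied.
    return base ** y - offset
```

The interval 0.0 to 0.1 gets stretched over a much wider output range than 0.9 to 1.0, which is exactly the denser sampling of small values that we are after.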
The input gamut mapping is a simple saturation clip. The colour is converted into HSV, using a built-in transform in OpenColorIO, and then the saturation is clipped to a maximum value of 1.0. Lastly, the colour is converted back into RGB. Preferably the saturation would have a roll-off instead of being clipped and the change in luminance resulting from this would be compensated for, but this would be impractical to implement in OpenColorIO. Next to that, I doubt that there would be any meaningful difference in at least 99.99% of the cases. This saturation clipping is done with the following transforms, after the image has been converted into the rendering space.
- !<FixedFunctionTransform> {style: RGB_TO_HSV}
- !<FileTransform> {src: saturation_mapping.cube, interpolation: linear}
- !<FixedFunctionTransform> {style: RGB_TO_HSV, direction: inverse}
OpenColorIO has a transform which can be used to clip values, but unfortunately it cannot be used to clip only the saturation. Therefore, the only practical solution is to use a 1D-LUT which leaves the hue and value untouched, but clips the saturation to a maximum of 1.0.
The contrast is applied using the linear variant of the contrast formula. It would also have been possible to convert the colour into a log space and then use the log variant of the contrast formula, but this has the side effect of raising the black level, which I find undesirable. Therefore, I am using the following transforms to reduce the contrast.
- !<CDLTransform> {slope: [0.18, 0.18, 0.18], direction: inverse}
- !<CDLTransform> {power: [0.85, 0.85, 0.85]}
- !<CDLTransform> {slope: [0.18, 0.18, 0.18]}
The slope specifies the pivot in linear space for the contrast; 0.18 corresponds to 18% middle grey. The power specifies the contrast factor, which is the same as the contrast factor used in grading applications like DaVinci Resolve.
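To see that these three CDL transforms really are the linear contrast formula from earlier, they can be composed by hand; this is a sanity check, not part of the configuration:

```python
def cdl_chain(x, pivot=0.18, contrast=0.85):
    x = x / pivot       # slope [0.18, 0.18, 0.18], direction: inverse
    x = x ** contrast   # power [0.85, 0.85, 0.85]
    return x * pivot    # slope [0.18, 0.18, 0.18]

def linear_contrast(x, pivot=0.18, contrast=0.85):
    # y = (x / pivot)^contrast * pivot
    return (x / pivot) ** contrast * pivot
```

The two functions agree for any positive input, and the pivot of 0.18 maps to itself.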
The saturation and path-to-white modifications are combined into one 3D-LUT, since I see no need to separate these two. This is basically just an implementation of the formulas described previously. The code to generate the LUT looks like this (just the relevant part):
# Corresponds to the V in HSV
Cmax = max(R, max(G, B))
if Cmax > 0:
    # Normalise the values to the range 0.0 - 1.0
    R /= Cmax
    G /= Cmax
    B /= Cmax
    # Invert the values
    R = 1.0 - R
    G = 1.0 - G
    B = 1.0 - B
    # We can now calculate the saturation.
    # We do clip it to a maximum of 1, because
    # of some imprecision due to LUTs.
    S = min(max(R, max(G, B)), 1.0)
    # Boost the saturation.
    # The 1.4 specifies by how much to boost the saturation.
    # Higher values saturate more.
    newS = 1.0 - math.pow(1.0 - S, 1.4)
    if S > 0.0:
        Sfactor = newS / S
        R *= Sfactor
        G *= Sfactor
        B *= Sfactor
    # Desaturate based on luminance.
    # The 7.0 power can be changed to determine
    # how quickly it desaturates. A lower number
    # desaturates slower.
    desat_factor = Cmax * (-0.25 / 128.0) + 1.0
    if desat_factor < 0.0:
        desat_factor = 0.0
    desat_factor = math.pow(desat_factor, 7.0)
    R *= desat_factor
    G *= desat_factor
    B *= desat_factor
    # Invert the values again
    R = 1.0 - R
    G = 1.0 - G
    B = 1.0 - B
    # Undo the normalisation
    R *= Cmax
    G *= Cmax
    B *= Cmax
    # Boost the exposure for very saturated colours
    exposure_boost = (math.pow(10.0, -Cmax) + math.pow(10.0, -10.0 * Cmax)) * 0.18 + 0.25
    saturation_attenuation = math.pow(newS, 2.0)
    saturation_exposure_boost = exposure_boost * saturation_attenuation + 1.0
    R *= saturation_exposure_boost
    G *= saturation_exposure_boost
    B *= saturation_exposure_boost
This is then used in the OpenColorIO configuration using the following transforms:
- !<LogAffineTransform> {base: 10.0, lin_side_offset: 0.001, lin_side_slope: 1.0, log_side_offset: 0.54, log_side_slope: 0.18}
- !<FileTransform> {src: look_bsp.cube}
- !<LogAffineTransform> {base: 10.0, lin_side_offset: 0.001, lin_side_slope: 1.0, log_side_offset: 0.54, log_side_slope: 0.18, direction: inverse}
The LogAffineTransforms are shaper transforms. This helps increase the precision of the 3D-LUT.
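To my understanding, the forward direction of OCIO’s LogAffineTransform computes log_side_slope * log_base(lin_side_slope * x + lin_side_offset) + log_side_offset; if that reading is correct, the shaper above works out to the following (check the OpenColorIO documentation for LogAffineTransform before relying on this):

```python
import math

def log_affine_forward(x):
    # 0.18 * log10(1.0 * x + 0.001) + 0.54
    # Maps linear 0.0 to exactly 0.0, since 0.18 * log10(0.001) = -0.54,
    # and compresses the HDR range into roughly 0.0 - 1.0 for the 3D-LUT.
    return 0.18 * math.log10(1.0 * x + 0.001) + 0.54
```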
Lastly, the colours should be gamut mapped to the display gamut. Gamut clipping does not give satisfying results, so I am using the method described previously in this web book. Display gamut mapping should be done in the display transform and not in the rendering transform; unfortunately, due to how OpenColorIO works, it would be tedious to do this properly. Since I practically only produce content for the web, which is either sRGB or Rec.709, I have decided to put the display gamut mapping in the rendering transform and have it limit the colours to the sRGB/Rec.709 gamut. When a display colour space with a wider gamut is used, the colours will still show properly, but I cannot make use of the more saturated colours that the wider gamut offers. It is possible to make multiple rendering transforms for different display gamuts to overcome this issue, although I have not done so.
The code for the gamut mapping looks like this:
Lo = luminance(R, G, B)
# Make sure that at least one of the values is above 0.0.
# Those values would be pure black anyways, and this math
# doesn't like all negative values that much.
if R <= 0.0 and G <= 0.0 and B <= 0.0:
    return [0.0, 0.0, 0.0]
# Convert to HSV
hsv = toHSV(R, G, B)
H = hsv[0]
S = hsv[1]
V = hsv[2]
# Saturation mapping
if S > 0.98:
    S = (S - 0.98) / 0.02
    S = S / (S + 1.0)
    S = S * (0.02 / 0.91) + 0.98
    S = min(S, 1.0)
# Fix the hue in order to make the reds and blues
# show in a more predictable manner.
if H >= 354.0 and (B < 0.00001 or G < 0.00001):
    H = 0.0
if H >= 240.0 and R < 0.00001:
    H = 240.0
# Convert back to RGB
rgb = toRGB(H, S, V)
R = rgb[0]
G = rgb[1]
B = rgb[2]
# Make sure that the luminance is still the same
L = luminance(R, G, B)
Lscale = Lo / L
R *= Lscale
G *= Lscale
B *= Lscale
# If one of the RGB values is still above 1.0,
# then scale all of the values so that the values
# do stay within 0.0 and 1.0.
maxC = max(max(R, G), B)
if maxC > 1.0:
    R /= maxC
    G /= maxC
    B /= maxC
And in the OpenColorIO configuration, the 3D-LUT is used as follows:
- !<ColorSpaceTransform> {src: ACEScg, dst: scene-linear Rec.709-sRGB}
- !<FileTransform> {src: gamut_mapping_shaper.cube}
- !<FileTransform> {src: gamut_mapping.cube, interpolation: tetrahedral}
- !<FileTransform> {src: gamut_mapping_shaper.cube, direction: inverse}
- !<ColorSpaceTransform> {src: scene-linear Rec.709-sRGB, dst: ACES2065-1}
The colours are converted from the rendering space (ACEScg, which has the AP1 gamut) to a colour space with the sRGB/Rec.709 gamut, then the shaper is applied, after which the actual gamut mapping LUT, then the inverse of the shaper is applied, and lastly the colours are converted from the linear sRGB/Rec.709 colour space into a generic display colour space (ACES2065-1 in my case).
I have put all of these improvements into one OpenColorIO configuration. The Python scripts used to generate the LUTs, the pre-generated LUTs, and the configuration itself have been uploaded to a repository that you can find here: https://github.com/bram0101/BSP-ACES-OCIO-Config
Please feel free to use it and to use it as a starting point to create your own rendering.
When implementing my version of the RRT in the OpenColorIO configuration, I came up with some things that I’d wish OpenColorIO had or changed.
The RGB to HSV fixed function is a non-standard implementation. According to the comments in the code, they wanted to support extended ranges. This could be useful for user interfaces, but when doing computational work, like saturation mapping, the fixed function becomes useless. For the input gamut mapping, I used this fixed function since I needed to support HDR values, but I could only get decent results when doing saturation clipping. Saturation mapping won’t work properly, and even with saturation clipping, the results are technically incorrect. I would prefer for this fixed function to use the standard implementation, with only a single alteration: when all three RGB values are negative, the result should be a negative V value.
In OpenColorIO, the rendering transform is called a view transform and after the view transform, the display transform is done. The idea is that view transforms are independent of the display device and that then the display transform converts it into the right display colour space. This is how it should be, but the display colour spaces can also be used as input colour spaces as well. In principle, I have nothing against it. However, when applying the display transform after the view transform, then I’d want to use gamut mapping, but when using the display colour space as an input colour space, I don’t. Unfortunately, OpenColorIO does not provide a good way to go about this, and thus I ended up doing the display gamut mapping in the view transform. OpenColorIO has to_scene_reference and from_scene_reference for input colour spaces and to_display_reference and from_display_reference for display colour spaces. Even though display colour spaces could be used as input colour spaces, they cannot define the to_scene_reference and from_scene_reference transforms. Instead, OpenColorIO simply uses the to_display_reference and from_display_reference transforms together with the default view transform. Simply allowing display colour spaces to also define the scene reference transforms would solve this issue.
This one is an addition. The display gamut mapping that I developed is currently implemented using a LUT, however it would be great if it could be a fixed function available in OpenColorIO.
I have decided to create requests on the OpenColorIO GitHub page to add in these modifications. Here are the links:
RGB to HSV: https://github.com/AcademySoftwareFoundation/OpenColorIO/issues/1616
Scene reference: https://github.com/AcademySoftwareFoundation/OpenColorIO/issues/1617
Gamut mapping: https://github.com/AcademySoftwareFoundation/OpenColorIO/issues/1618
When looking at the end result, I am satisfied with my improvements and for now, I will leave it at this. It could definitely be better, though.
The ACES RRT still applies its desaturation matrix, which I have essentially replaced with the luminance-dependent desaturation. It would be better if that matrix weren’t applied at all. But there is also the question of whether I should completely replace the ACES RRT. Should I use a different tone mapping or a different rendering gamut? Maybe.
The goal for this web book is at least completed. I hope this does give you some insight into the ACES RRT and enough resources to start making your own modifications to it.
The following are my ramblings on the next version of ACES and its RRT. They’re not fully thought through, but I wanted to get them out there and I think this is the best place for it. You don’t have to keep reading if you don’t want to.
I have occasionally lurked on the ACES central forums, where they are working on the new RRT for ACES 2.0, and I have opinions. Now, I should voice these opinions there, but I'd rather say them once here than spend hours every day arguing on a forum.
The current version of ACES provides a single rendering, the RRT. The current plans for the next version of ACES will also only provide a single rendering. This is where I believe they are already going wrong.
The thing about colour renderings is that they are creative manipulations of light. When a painter is painting a landscape, the light from that landscape hits their eyes and the painter then figures out which ratios of paint are needed to represent the colour of that light. The painter is doing a colour rendering, and every painter is going to do it differently. Each film stock renders an image differently as well, and cinematographers choose different ones based on their rendering. Choosing which colour rendering to use is a creative task in the telling of a story, and every story needs a different rendering. You cannot force the same rendering onto every single project.
For the next version of ACES, they are looking into a “chromaticity-linear” rendering. As described in the chapter about the notorious six, most renderings used today have hue skews when a colour gets brighter. The idea of a chromaticity-linear rendering is that the hue remains perceptually the same, mimicking what our eyes would perceive. The chromaticity-linear rendering should be neutral and accurately represent a scene.
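The difference between a per-channel rendering and a chromaticity-linear one can be illustrated with a toy tone curve. In the sketch below, the per-channel version applies the curve to each channel independently, which changes the ratios between the channels and so skews the hue; the chromaticity-linear version applies the curve to a single intensity measure and scales the whole RGB triplet, preserving the ratios. Both the Reinhard-style curve and the use of the maximum channel as the intensity measure are placeholder assumptions, not what ACES 2.0 actually does.

```python
# Toy illustration: per-channel tone mapping skews channel ratios (a hue
# skew), while a "chromaticity-linear" approach preserves them.

def tone(x):
    # Simple placeholder tone curve, mapping [0, inf) onto [0, 1).
    return x / (1.0 + x)

def per_channel(rgb):
    # Apply the curve to each channel independently.
    return tuple(tone(c) for c in rgb)

def chromaticity_linear(rgb):
    # Apply the curve to one intensity measure, then scale the triplet,
    # so the ratios between the channels stay the same.
    intensity = max(rgb)  # placeholder intensity measure
    if intensity == 0.0:
        return rgb
    scale = tone(intensity) / intensity
    return tuple(c * scale for c in rgb)

rgb = (8.0, 2.0, 0.5)            # a bright orange; R:G ratio is 4:1
print(per_channel(rgb))          # ratios compress: the hue skews
print(chromaticity_linear(rgb))  # ratios preserved: same chromaticity
```

With the per-channel version, the brightest channel saturates first, pulling the colour towards white; the chromaticity-linear version dims the colour without changing what hue it reads as.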
Such a neutral and accurate rendering might sound great, but do we really want this for every project? Many film stocks are far from neutral or accurate, yet many cinematographers still gravitate towards them, either to use them or to emulate them. Diffusion filters are put on cameras, yet our eyes don’t work like that. Anamorphic lenses are very common, but they are full of quirks and artefacts. Lighting in movies is often dramatic, rather than true to life. When telling stories, we aren’t showing reality. We are showing a heightened, abstract version of reality, specifically created for that story. So, if everything in storytelling is fake, then why should we be using a rendering that is supposed to be real?
The majority of movies and TV shows aren’t trying to be real, neutral, or accurate, so I doubt that many of their cinematographers would choose such a chromaticity-linear rendering and there are many stories where I believe such a rendering to be a bad decision from a storytelling perspective. But, there are also projects where a chromaticity-linear rendering is actually a must, where the image should be neutral and evoke real life. Again, different stories require different renderings.
Right now, with the plan to use a completely different rendering than in the current version of ACES, the question has arisen: What about people who want to use a different rendering, like the current RRT?
The answer currently given is that the rendering should be applied as a look modification transform (LMT), together with the inverse of the ACES RRT. ACES then applies its RRT, which cancels out the inverse, leaving you with the custom rendering.
This is a problematic solution. As the name implies, a look modification transform is meant to modify the look of the RRT, not completely replace it. The need to apply the inverse of the ACES RRT highlights the strangeness as well. Applying the inverse RRT only for it to be cancelled out by the RRT does not come for free either: these calculations reduce precision in the final image. And any output colour not producible by the RRT won't be available to any other rendering either.
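Both problems can be demonstrated with a toy roundtrip. Here a simple Reinhard-style curve stands in for the RRT (the real RRT is far more complex), and the intermediate result is rounded to half-float precision to mimic storing the look-modified image before the forward transform runs. These are illustrative assumptions, not measurements of the actual ACES pipeline.

```python
# Toy demonstration of the inverse-rendering roundtrip.
import struct

def rrt(x):
    return x / (1.0 + x)   # placeholder forward "rendering", outputs [0, 1)

def rrt_inverse(y):
    return y / (1.0 - y)   # exact mathematical inverse of the curve above

def to_half(x):
    # Round-trip through IEEE half precision, a common image storage format.
    return struct.unpack('e', struct.pack('e', x))[0]

# Inverse, store at reduced precision, then forward: the cancellation is no
# longer exact, so precision is lost in the final image.
samples = (0.1, 0.3, 0.7, 0.9)
errors = [abs(rrt(to_half(rrt_inverse(y))) - y) for y in samples]
print(max(errors))

# Also note: this curve never outputs values >= 1.0, so a custom rendering
# layered on top of it can never reach them, and rrt_inverse(1.0) would
# divide by zero.
```

The magnitude of the error depends on the curve and the storage precision, but the principle holds: inverting a rendering only to re-apply it is never a free operation.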
ACES is a standard, and on the homepage it says “ACES can be used on any type of production from features to television, commercials, AR/VR and more.” It's meant to be usable by anyone and everyone, yet if someone wants to use a different rendering, they have to jump through hoops to hack it into ACES.
ACES is a standard that is supposed to be used by everyone, but every project requires a different rendering. Therefore, the only logical conclusion can be that ACES should provide a few different kinds of renderings, that colorists can then use as a starting point, modifying them to create the look that they are after.
The original RRT can be supplied for backwards compatibility.
A chromaticity-linear rendering can be supplied for projects that require a neutral rendering.
A new-and-improved film-like rendering can be supplied for projects that go for such a look.
Some people want the highlight roll-off, but little or no contrast in the shadows. This can be seen in Pixar and Disney Animation movies, where they want a bright look. So, such a rendering can also be supplied.
And lastly, an option to apply no rendering at all: an identity rendering. For HDR, you might want to be as true to life as possible and not apply any rendering, or maybe you just like the look of the image clipping instead of rolling off the highlights.
These five different kinds of renderings should be a great starting point for many projects, allowing storytellers to include the choice of rendering as part of their storytelling toolkit. In the event that someone does want to use a custom rendering, then they can simply tell ACES to apply no rendering and implement the custom rendering as an LMT.
Yes, I said that you shouldn't have to use LMTs to replace a rendering, but with my proposal there is no rendering to replace, since you are telling ACES not to apply one. Right now, if you want to use a custom rendering, you also have to apply the inverse RRT, just so that ACES can apply the RRT again. That's bad, but with my proposal it isn't necessary anymore: no loss of precision, and all colours are available to the custom rendering. You could open up the renderings to allow custom renderings to be inserted into the imaging pipeline, but that would introduce significant complexity, and the LMT framework already supports everything needed. It is simpler to use LMTs for the few cases that require custom renderings than to have the rendering transforms also behave like LMTs.
Camera manufacturers, like ARRI, could even provide specific ACES versions of their renderings. Colorists can then tell ACES to not apply any rendering, colour grade the images like they wish, and lastly apply the custom rendering as, for example, a LUT. Because it is in the ACES framework, the custom rendering will still be display device agnostic, so you can still export it to any display colour space that you’d want.
In DaVinci Resolve, when using ACES, you can specify the display colour space to use. With my proposal, a new dropdown box would be added below it, where you specify which rendering you'd want to use. It's simple, and everyone immediately understands it.
OpenColorIO already has the framework for this. It provides view transforms where the user can specify which view transform they want to be applied to the image. View transform is just OpenColorIO’s name for a rendering transform. The different renderings can be provided as built-in functions, just like the current RRT is, and then a view transform can be created for each one.
Display gamut mapping is something that I also view as crucial. What is important to remember is that display gamut mapping is supposed to be done as part of the display transform and not the rendering transform. Rendering transforms are supposed to be display agnostic. Gamut mapping is about converting a colour from one gamut to another, which is precisely what a display transform, among other things, does. The gamut mapping method has to know the gamut it transforms to, so having it in the rendering transform would mean needing a different rendering transform for every display colour space. Additionally, ACES has the concept of OCES: the state where the rendering has been applied but the colours are still in the ACES 2065-1 colour space, meant for archival. If the gamut mapping is done in the rendering transform, then which gamut is the image mapped to when in OCES? The only sensible answer is that no gamut mapping should have been applied to it yet. The only logical place for display gamut mapping is in the display transform, not the rendering transform.
Now, I see two possibilities with this. Either one display gamut mapping method is agreed upon and it is applied in every display transform, always. Or, multiple display gamut mapping methods are provided and the user can choose which one to use.
Using only one gamut mapping method and always applying it makes for a very simple user experience. It is easy to implement, and I doubt that many users of ACES have strong feelings about being able to choose which gamut mapping method to use, as long as it's not gamut clipping. However, it would then also be applied when using the current ACES RRT, which means there wouldn't be 100% backwards compatibility. How bad that is can be debated.
Providing multiple methods gives control to colorists. They can choose the method that works best for their project. They could also choose gamut clipping together with the current ACES RRT and get 100% backwards compatibility. In DaVinci Resolve, there could simply be another dropdown box next to the one for choosing the rendering. In OpenColorIO, however, it becomes more complicated. OpenColorIO does not provide a framework for this, and I don't believe that it should; I see no intuitive way it could be implemented in every program that uses OpenColorIO. The only solution would be for OpenColorIO to provide the different gamut mapping methods and for the configuration to simply choose one. If you want to use a different gamut mapping method, you change the configuration. But do we really need multiple gamut mapping methods?
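The difference between gamut clipping and a gentler mapping can be sketched in a few lines. Clipping discards everything outside the range, so distinct out-of-gamut colours collapse to the same value; a compression curve smoothly squeezes a range of out-of-gamut values into the top of the display range, keeping them distinguishable. To keep the sketch short it works on a single channel, whereas a real display gamut mapper works on chromaticities; the curve and its thresholds are placeholders, not the ACES reference gamut compression.

```python
# Toy comparison of gamut clipping versus a smooth compression curve.

def clip(x):
    # Hard clip to the display range [0, 1]: out-of-range detail is lost.
    return min(max(x, 0.0), 1.0)

def compress(x, threshold=0.8, limit=1.5):
    # Values below `threshold` pass through untouched. Values between
    # `threshold` and `limit` are smoothly squeezed into [threshold, 1.0].
    # (Placeholder curve and thresholds, chosen only for illustration.)
    if x <= threshold:
        return x
    x = min(x, limit)
    t = (x - threshold) / (limit - threshold)
    eased = 1.0 - (1.0 - t) ** 2   # ease out towards the gamut boundary
    return threshold + eased * (1.0 - threshold)

# Clipping maps 1.1 and 1.4 to the same value; compression keeps them
# distinct, so gradients in out-of-gamut colours don't flatten into a patch.
print(clip(1.1), clip(1.4))
print(compress(1.1), compress(1.4))
```

The trade-off is that compression also alters colours near the gamut boundary that were technically in range, which is exactly why someone might still prefer clipping for strict backwards compatibility.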
I strongly believe that display gamut mapping should be applied and that it should be a part of the display transform. Whether to only have one that is used everywhere or to provide multiple for the user to choose, is a matter of opinion. I think that both are fine choices, but at this point in time I think that going for only one method would be best. Later on, we can always add support for other gamut mapping methods, but we can never take that ability away.
I hope that the next version of ACES is going to be something that will be good for everyone. Whether it is using my solution or not, I don’t care. I just want people to be able to make the stuff that they want with ACES.