RenderDotC's default behavior is to generate 8 bit linear data. This means that each channel (red, green, blue, and alpha) spans from nominal black at 0 to nominal white at 255. While this is often acceptable, RenderDotC is capable of producing higher quality output. When rendering for film, the 8 bit model may run out of steam. This document explains why rendering for film is more demanding on color quantization (especially when composited with live action) and how to achieve superior results with RenderDotC.
Conversion of 10-bit Log Film Data To 8-bit Linear or Video Data by Kodak Picture & Television Imaging

Grayscale Transformations of Cineon Digital Film Data by Cinesite Digital Film Center

The following is a layman's explanation:
Film has a tremendous dynamic range [4]. It starts at black and must be overexposed mercilessly before it becomes totally saturated and can get no whiter. Instead of considering the total possible range of film, we start by focusing on the normal range of intensities that might occur when filming an indoor or outdoor scene, from a reference black to a reference white. Cameras are sometimes calibrated by holding benchmark gray or white cards in front of the lens. Such cards are of a known intensity expressed as a percentage of 100% reference white. The 18% gray card is used in film photography and television cameras are calibrated against the 90% white card [2].
The Cineon format uses the ~1% black card as reference black and the 90% white card as reference white [1]. On film, total saturation is perhaps 20 times brighter than reference white. This additional range is called "headroom" and can be used to capture specular highlights on water or chrome. Cineon's digital negative also captures the extended headroom. A logarithmic scale is employed to focus the numeric precision where it is needed most: on the darker shades, sacrificing precision in the headroom. Cineon uses 10 bits to avoid contouring (Mach bands) in digital images. Therefore, this format is known as "10-bit log".
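A minimal sketch of the 10-bit log transfer may help. It uses the conversion constants commonly quoted from the Kodak document cited above (reference white at code 685, 0.002 printing density per code value, negative gamma 0.6); these constants and the helper names are assumptions of this sketch, not part of RenderDotC.

```python
import math

# Assumed constants from the commonly cited Cineon conversion:
REF_WHITE_CODE = 685    # 10-bit code aligned with the 90% white card
DENSITY_STEP   = 0.002  # printing density per code value
NEG_GAMMA      = 0.6    # gamma of the film negative

def log_encode(linear):
    """Map linear exposure (1.0 = reference white) to a 10-bit log code."""
    code = REF_WHITE_CODE + math.log10(linear) * NEG_GAMMA / DENSITY_STEP
    return max(0, min(1023, round(code)))

def log_decode(code):
    """Map a 10-bit log code back to linear exposure."""
    return 10 ** ((code - REF_WHITE_CODE) * DENSITY_STEP / NEG_GAMMA)

# Reference white lands at code 685, reference black decodes to about 1%
# at code 95, and the top code 1023 decodes to over 13x reference white
# with these constants: the extended headroom.
```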
The arguments to RiQuantize are as follows:

Quantize type one min max ditheramplitude
Another limitation is that colors and alpha must have identical quantization parameters [3]. As we shall see later, RenderDotC overcomes this limitation with an implementation-specific extension to the RenderMan standard.
Quantize "rgba" 65535 0 65535 0.5 # 16 bit integer

Or even 32 bit floating point:
Quantize "rgba" 0 0 0 0 # 32 bit floating point

[Note that min, max, and ditheramplitude are meaningless when using floating point output. Here, they are arbitrarily set to all zeros.]
When quantizing to integers, RenderDotC looks only at max to determine how many bits per channel to use. Possible values are 1, 2, 4, 8, or 16 bits per channel. The smallest number of bits that will accommodate max is selected. It makes sense to choose a value for max that uses all of the bits. In the example above, we wanted to use 16 bits so we set max = 2^16 - 1 = 65535. This is the largest possible unsigned 16 bit number.
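The selection rule just described can be sketched as follows (a hypothetical helper, not RenderDotC's actual code):

```python
def bits_per_channel(max_value):
    """Pick the smallest supported bit depth that accommodates max_value."""
    for bits in (1, 2, 4, 8, 16):
        if max_value <= 2**bits - 1:
            return bits
    raise ValueError("max is too large for integer output")

# max = 65535 selects 16 bits; max = 255 selects 8 bits.
# Note that max = 4095 also selects 16 bits, since 12 is not
# among the supported depths.
```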
The standard (but shortsighted) approach that is often taken when recording computer generated images (CGI) to film is to align 0 with reference black and max with reference white. This works reasonably well except that the extended headroom of the film goes unused. Bright highlights in the CGI just aren't as bright as they should be. If the CGI is mixed with live action, it may look dull and flat by comparison [4].
Here's where the one parameter of RiQuantize comes into play. We can set one to some value less than max, align one with reference white, and leave the range from one to max for the extended headroom. If a fully illuminated white object appears in the scene, the shader will return Ci = color(1.0) and reference white will be met. A specular highlight off of chrome may produce an even more intense color such as color(5.0). Be careful that the shader does not arbitrarily clamp all colors to 1.0; otherwise, the headroom will never be exercised.
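The mapping from shader output to integer code values can be sketched like this. The scale-dither-round-clamp sequence follows the usual RenderMan quantization rule; the symmetric dither and the function name are assumptions of this sketch.

```python
import random

def quantize(value, one, qmin, qmax, dither):
    """Scale by `one`, dither, round, and clamp to [qmin, qmax]."""
    q = round(value * one + dither * random.uniform(-1.0, 1.0))
    return max(qmin, min(qmax, q))

# With one = 4095 and max = 65535 (dither disabled for clarity):
print(quantize(1.0, 4095, 0, 65535, 0.0))   # 4095: reference white
print(quantize(5.0, 4095, 0, 65535, 0.0))   # 20475: highlight, well inside the headroom
print(quantize(20.0, 4095, 0, 65535, 0.0))  # 65535: clamped at the top of the range
```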
What's a good value for one? The perfect value for covering the same extended range as 10 bit log is about 4829. This is close enough to 2^12 - 1 = 4095 that we may substitute it for the convenience of nice round numbers:
Quantize "rgba" 4095 0 65535 0.5

Don't make the mistake of choosing 1023 just because that sounds like a good value for 10 bit log. The RenderMan quantization space remains linear. 10 bits in linear space has nothing to do with 10 bit log. Choosing 1023 results in leaving more headroom than can be captured on film at the cost of reduced precision below reference white.
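The trade-off can be checked with a little arithmetic: the headroom factor is simply max divided by one, and lowering one buys headroom at the cost of precision below reference white.

```python
# Headroom factor = max / one: how many times brighter than
# reference white the quantized output can record.
for one in (4829, 4095, 1023):
    print(one, 65535 / one)

# one = 4829 -> ~13.6x, covering 10 bit log's extended range
# one = 4095 -> ~16x, a convenient round number that is close enough
# one = 1023 -> ~64x, far more headroom than film can capture,
#               with only 1024 steps left below reference white
```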
For images not created by RenderDotC, reference white may be explicitly set when building the texture or environment maps. For more information, see the documentation on the refwhite option.
Quantize "rgb" 4095 0 65535 0.5 # 16 bit linear with headroom

RenderDotC still recognizes the type "rgba" and simply applies the quantization to both RGB and A.
Quantize "a" 65535 0 65535 0.5 # Straight 16 bit linear