Utility Shaders

Gamma/Gain Nodes


declare shader "mip_gamma_gain" (
        color   "input",
        scalar  "gamma"        default 1.0,
        scalar  "gain"         default 1.0,
        boolean "reverse"      default off,
    )
    apply texture, environment, lens
    version 1
end declare

This is a simple shader that applies a gamma and a gain (multiplication) to a color. Many similar shaders exist in various OEM integrations of mental ray, so this shader is primarily of interest for standalone mental ray and for cross-platform phenomena development.

If reverse is off, the shader takes the input, multiplies it by the gain, and then applies a gamma correction of gamma to the color.

If reverse is on, the shader takes the input, applies a reverse gamma correction of gamma to the color, and then divides it by the gain; i.e. the exact inverse of the operation for when reverse is off.
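For instance, one common use of the reverse mode is to linearize a gamma-encoded texture before it is used for shading. A minimal .mi sketch, assuming a hypothetical upstream color shader named "diffuse_tex" whose result is gamma 2.2 encoded:


# Hypothetical: "diffuse_tex" is some upstream color texture shader
# whose output is gamma 2.2 encoded
shader "diffuse_linear" "mip_gamma_gain" (
        "input"   = "diffuse_tex",
        "gamma"   2.2,
        "gain"    1.0,
        "reverse" on
    )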

The shader can also be used as a simple gamma lens shader, in which case the input is not used; the eye ray color is used instead.
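As a further sketch, attaching the shader to a camera as a lens shader to view the rendered image with a display gamma of 2.2; the shader and camera names are hypothetical, and the camera settings are only illustrative:


# Hypothetical: apply a display gamma of 2.2 to the rendered image
shader "display_gamma" "mip_gamma_gain" (
        "gamma"   2.2,
        "gain"    1.0,
        "reverse" off
    )

camera "render_cam"
    lens = "display_gamma"
    focal 50
    aperture 44
    aspect 1.333
    resolution 800 600
end camera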

Render Subset of Scene

This shader allows re-rendering a subset of the objects in a scene, defined by material, geometric objects, or instance labels. It is the ideal "quick fix" solution when almost everything in a scene is perfect, and just this one little object or one material needs a small tweak [1].

It is applied as a lens shader and works by first testing which object an eye-ray hits, and only if the object is part of the desired subset is it actually shaded at all.

Pixels of the background and other objects by default return transparent black (0 0 0 0), making the final render image ideal for compositing directly on top of the original render.

So, for example, if a certain material in a scene did not turn out satisfactory, one can simply modify the material, apply mip_render_subset as a lens shader with that material selected, re-render (only the pixels showing the material are shaded), and composite the result on top of the original image.

An example of using mip_render_subset on one material

Naturally, only pixels which see the material "directly" are re-rendered, and not e.g. reflections in other objects that show the material.

The shader relies on the calling order used in raytracing and does not work correctly (and yields no benefit in render time) when using the rasterizer, because the rasterizer calls lens shaders after already shading the surface(s).


declare shader "mip_render_subset" (
        boolean "enabled"        default on,
        array geometry "objects",
        array integer  "instance_label",
        material       "material",
        boolean "mask_only"      default off,
        color   "mask_color"     default 1 1 1 1,
        color   "background"     default 0 0 0 0,
        color   "other_objects"  default 0 0 0 0,
        boolean "full_screen_fg" default on
    )
    apply lens
    version 5
end declare

enabled turns the shader on or off. When off, it does nothing, and does not affect the rendering in any way.

objects, instance_label and material are the constraints one can apply to find the subset of objects to shade. If more than one constraint is present, all must be fulfilled, i.e. if both a material and three objects are chosen, only those of the objects that actually have that material will be shaded.

If one does not want to shade the subset, but only to find where it is on screen, one can turn on mask_only. Instead of shading the objects in the subset, the mask_color is returned, and no shading whatsoever is performed, which is very fast.

Rays not hitting any objects return the background color, and rays hitting any object not in the subset return the other_objects color.
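For example, a minimal sketch of using these parameters to generate a matte: white where a hypothetical material "chrome_mtl" is directly visible, transparent black everywhere else, with no shading performed:


# Hypothetical: output a pure white mask for "chrome_mtl"; nothing is shaded
shader "chrome_matte" "mip_render_subset" (
        "material"      "chrome_mtl",
        "mask_only"     on,
        "mask_color"    1 1 1 1,
        "background"    0 0 0 0,
        "other_objects" 0 0 0 0
    )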

Finally, full_screen_fg decides whether the FG preprocess should apply to all objects, or only to those in the subset. Since FG blends neighboring FG samples, a given object is likely to use information from FG points created on nearby objects that are not in the subset. This is especially true if the objects are coplanar. Therefore it is advisable to let the FG pre-pass "see" the entire scene.

Naturally, turning off this option and creating FG points only for the subset of objects is faster, but there is a certain risk of boundary artifacts, especially in animations. If the scene uses a saved FG map, this option can be left off.
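Putting the pieces together, a sketch of the "re-render one material" case from above; the material and shader names are hypothetical:


# Hypothetical: re-render only pixels where "old_plastic_mtl" is directly visible
shader "subset_lens" "mip_render_subset" (
        "enabled"        on,
        "material"       "old_plastic_mtl",
        "background"     0 0 0 0,
        "other_objects"  0 0 0 0,
        "full_screen_fg" on
    )

The named shader is then referenced from the camera, e.g. with a lens = "subset_lens" statement, and the resulting image can be composited directly on top of the original render.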

Binary Proxy

This shader allows a very fast way to implement demand-loaded geometry. Its main goal is performance, since it leapfrogs any form of translation or parsing by writing a binary file format that can be sucked directly into RAM at render time. There are many other methods to perform demand loading in mental ray (assemblies, file objects, geometry shaders, etc.), but these may require specific support in the host application, and generally involve parsing or translation steps that can impact performance.

To use it, the shader is applied as a geometry shader in the scene. See the mental ray manual about geometry shaders.


declare shader
    geometry "mip_binaryproxy" (
        string   "object_filename",
        boolean  "write_geometry",
        geometry "geometry",
        scalar   "meter_scale",
        integer  "flags"
    )
    version 4
    apply geometry
end declare

object_filename is the filename to read (or write). By convention, the file extension is ".mib" for "mental images binary".

The shader has a "read" mode and a "write" mode:

- When write_geometry is on, the geometry connected to the geometry parameter is tessellated [2] and written to the file named by object_filename, and this object is returned.

- When write_geometry is off, the geometry parameter is ignored, and the object is instead read back from object_filename at render time.

The meter_scale allows the object to be interpreted in a unit-independent way. If it is 0.0, the feature is not used. When used, the value should be the number of scene units that represent one meter, i.e. if scene units are millimeters this would be 1000.0, and so on.

When writing (write_geometry is on), this value is simply stored as metadata in the .mib file. When reading (write_geometry is off), the object is scaled by the ratio between the value stored in the file and the value passed at read time, to account for any difference in units.
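As an illustration, a write pass followed by a later read pass might be set up as below; the object name, file name, and the assumption that scene units are centimeters are hypothetical. The named shader is then used as a geometry shader on an instance, as described in the mental ray manual:


# Hypothetical write pass: tessellate the existing object "hires_tree_obj"
# and store it in a binary proxy file
shader "tree_proxy_write" "mip_binaryproxy" (
        "object_filename" "hires_tree.mib",
        "write_geometry"  on,
        "geometry"        "hires_tree_obj",
        "meter_scale"     100.0
    )

# Hypothetical read pass: the "geometry" parameter is ignored and the
# object is loaded from the .mib file; if the reading scene used different
# units, the object would be rescaled to compensate
shader "tree_proxy_read" "mip_binaryproxy" (
        "object_filename" "hires_tree.mib",
        "write_geometry"  off,
        "meter_scale"     100.0
    )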

The flags parameter is for algorithm control and should in most cases be left at 0. It is a bit flag, with each bit having a specific meaning.

Currently used values are:

All other bits should be kept zero, since they may become meaningful in future versions.


Footnotes
[1] And the client is on his way up the stairs.

[2] Note that when baking displacement, a view-dependent approximation can not be used. This is because there is no view set up at the time when this shader executes, so the resulting tessellation will turn out very poor.