When rendering without any form of post production, transparency
requires no special consideration: one simply adds a transparency
shader of some sort, mental ray renders it correctly, and all
is well.
However, as soon as one begins performing post production work
on the image and is rendering to multiple frame buffers, even
if simply using mental ray's built-in frame buffers such as
"z" (depth) or "m" (motion vectors), special
thought must be put into how transparency is handled.
In general, mental ray collects its frame buffer data from
the eye ray, i.e. from the first object hit by the ray shot
from the camera. So the z-depth, motion vectors, etc. will come
from this first object.
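
As a minimal illustration of where this data originates at
shading time, consider the following sketch. The shader name and
the user frame buffer index are assumptions made for this
example; mental ray fills its built-in "z" and "m" buffers
itself from this same first intersection.

    #include "shader.h"

    DLLEXPORT int depth_probe_version(void) { return 1; }

    DLLEXPORT miBoolean depth_probe(
        miColor *result, miState *state, void *paras)
    {
        (void)paras;  /* no parameters in this sketch */

        if (state->type == miRAY_EYE) {
            /* state->dist is the distance from the camera to the
               first object hit - exactly what ends up in "z". */
            miScalar depth = (miScalar)state->dist;
            mi_fb_put(state, 0, &depth);  /* user buffer 0 assumed */
        }

        /* Shade plain white so the example is self-contained. */
        result->r = result->g = result->b = result->a = 1.0f;
        return miTRUE;
    }
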
What if the first object hit is completely transparent? Or
transparent in parts, such as an image of a tree mapped to a
flat plane and cut out with an opacity mask, standing in front
of a house?
With most transparency-related shaders, even though the tree is
quite correctly visible in the final rendering and the house can
be seen between its branches, the "z" (depth) and other
frame buffers will most likely contain the depth of the flat
plane! For most post-processing work, this is undesirable.
To solve this problem the mental ray API contains a function
called mi_trace_continue, which continues a ray "as
if the intersection never happened". The shader
mip_card_opacity uses this internally, switching between
"standard" transparency and mi_trace_continue (total
transparency) at a given opacity threshold.
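
The following is a minimal sketch of that idea, not the actual
mip_card_opacity source; the shader name and the parameter names
(input, opacity, threshold) are assumptions for illustration.

    #include "shader.h"

    /* Hypothetical parameters; the real mip_card_opacity
       declaration may differ. */
    struct card_opacity_sketch {
        miColor  input;      /* already-shaded surface color */
        miScalar opacity;    /* opacity, e.g. from a texture mask */
        miScalar threshold;  /* below this, fully transparent */
    };

    DLLEXPORT int card_opacity_sketch_version(void) { return 1; }

    DLLEXPORT miBoolean card_opacity_sketch(
        miColor *result, miState *state,
        struct card_opacity_sketch *paras)
    {
        miScalar opacity   = *mi_eval_scalar(&paras->opacity);
        miScalar threshold = *mi_eval_scalar(&paras->threshold);

        if (opacity < threshold) {
            /* Below the threshold: continue the ray as if this
               intersection never happened, so the frame buffers
               (z-depth, motion vectors, ...) receive the data of
               whatever lies behind this object. */
            return mi_trace_continue(result, state);
        }

        /* At or above the threshold: shade normally and blend
           with the background using "standard" transparency. */
        *result = *mi_eval_color(&paras->input);
        if (opacity < 1.0f) {
            miColor behind;
            if (mi_trace_transparent(&behind, state)) {
                miScalar t = 1.0f - opacity;
                result->r = result->r * opacity + behind.r * t;
                result->g = result->g * opacity + behind.g * t;
                result->b = result->b * opacity + behind.b * t;
                result->a = result->a * opacity + behind.a * t;
            }
        }
        return miTRUE;
    }

The key difference between the two branches: mi_trace_transparent
still registers the flat plane as the first object hit, so the
frame buffers keep its depth, whereas mi_trace_continue removes
the intersection entirely and the frame buffers see the object
behind it.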