Maya Shaders

Wednesday, August 24, 2005

Types of shaders - as defined by mental ray

Material Shaders

Material shaders are the primary type of shaders. All materials defined in the scene must at least define a material shader. Materials may also define other types of shaders, such as shadow, volume, photon, and environment shaders, which are optional and of secondary importance.

When mental ray casts a visible ray, such as those cast by the camera (called primary rays) or those that are cast for reflections and refractions (collectively called secondary rays), mental ray determines the next object in the scene that is hit by that ray. This process is called intersection testing. For example, when a primary ray cast from the camera through the viewing plane's pixel (100, 100) intersects with a yellow sphere, pixel (100, 100) in the output image will be painted yellow. (The actual process is slightly complicated by supersampling and filtering, which can cause more than one primary ray to contribute to a pixel.)

The core of mental ray has no concept of 'yellow.' This color is computed by the material shader attached to the sphere that was hit by the ray. mental ray records general information about the sphere object, such as point of intersection, normal vector, transformation matrix etc. in a data structure called the state, and calls the material shader attached to the object. More precisely, the material shader, along with its parameters (called shader parameters), is part of the material, which is attached to or inherited by the polygon or surface that forms the part of the object that was hit by the ray. Objects are usually built from multiple polygons and/or surfaces, each of which may have a different material.

Material shaders normally do quite complicated computations to arrive at the final color of a point on the object:

  • The shader parameters usually include constant ambient, diffuse, and specular colors and other parameters such as transparency, and possibly optional textures that need to be evaluated to compute the actual values at the intersection point. If textures are present, texture shaders are called by using one of the lookup functions provided by mental ray. Alternatively, shader assignment may be used for texturing.

  • The illumination computation sums up the contribution from various light sources listed in the shader parameters. To obtain the amount of light arriving from a light source, a light shader is called by calling a light trace or sample function provided by mental ray. Light shaders are discussed in a separate section below. After the illumination computation is finished, the ambient, diffuse, and specular colors have been combined into a single material color (assuming a more conventional material shader).

  • If the material is reflective, transparent, or using refraction, as indicated by appropriate shader parameters, the shader must cast secondary rays and apply the result to the material color calculated in the previous step. (Transparency is a variation of refractive transparency where the ray continues in the same direction, while refraction rays may alter the direction based on an index of refraction.) Secondary rays, like primary rays, cause mental ray to do intersection testing and call another material shader if the intersection test hit an object. For this reason, material shaders must be reentrant. In particular, a secondary refraction or transparency ray will hit the back side of the same object if face both is set in the options and the object is a closed volume.
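
The three steps above can be sketched in C. This is a simplified, hypothetical stand-in rather than the real mental ray API: `color` and `light_sample` replace miColor, miState, and calls such as mi_sample_light and mi_trace_reflection, so that only the control flow of a conventional material shader is shown.

```c
#include <stddef.h>

/* Minimal stand-in types; a real shader would use miColor, miState, and
 * the shader parameter struct declared against mental ray's shader.h. */
typedef struct { double r, g, b; } color;
typedef struct { color intensity; double n_dot_l; } light_sample;

static color color_add(color a, color b) {
    color c = { a.r + b.r, a.g + b.g, a.b + b.b };
    return c;
}

static color color_scale(color a, double s) {
    color c = { a.r * s, a.g * s, a.b * s };
    return c;
}

/* Steps 1 and 2: start from the ambient color, then accumulate the diffuse
 * contribution of each light sample (stand-in for the mi_sample_light loop). */
color shade_diffuse(color ambient, color diffuse,
                    const light_sample *lights, size_t n) {
    color result = ambient;
    for (size_t i = 0; i < n; i++) {
        double d = lights[i].n_dot_l;
        if (d > 0.0) {               /* light hits the front side */
            color contrib = color_scale(diffuse, d);
            contrib.r *= lights[i].intensity.r;
            contrib.g *= lights[i].intensity.g;
            contrib.b *= lights[i].intensity.b;
            result = color_add(result, contrib);
        }
    }
    return result;
}

/* Step 3: blend in a secondary-ray result, e.g. the color returned by a
 * reflection ray (mi_trace_reflection in the real API). */
color blend_reflection(color material, color reflected, double reflectivity) {
    return color_add(color_scale(material, 1.0 - reflectivity),
                     color_scale(reflected, reflectivity));
}
```

A real material shader would, of course, obtain the light samples and the reflected color by calling back into mental ray, which is what makes reentrancy necessary.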

Texture Shaders

Texture shaders evaluate a texture and typically return a color, scalar, or vector (although, like any shader, their return type can be freely chosen and may even be a structure of multiple values). Textures can either be procedural, for example evaluating a 3D texture based on noise functions or calling other shaders, or they can perform an image lookup. The texture shader needs to know which point on the texture to look up; this is passed as a vector in its coord parameter. Computing that coordinate is a very flexible way to implement all sorts of projections, wrapping, scaling, replication, distortion, cropping, and many other functions, so it is also implemented as a separate shader. It could be done inside the texture lookup shader itself, but factoring it out into a separate shader allows all those projections and other manipulations to be implemented only once, instead of in every texture shader.

Volume Shaders

Volume shaders may be attached to the camera or to a material. They modify the color returned from an intersection point to account for the distance the ray traveled through a volume. The most common application for volume shaders is atmospheric fog effects; for example, a simple volume shader may simulate fog by fading the input color to white depending on the ray distance. By definition, the distance state->dist is 0.0 and the intersection point is undefined if the ray has infinite length.

Volume shaders are normally called in three situations. When a material shader returns, the volume shader that the material shader left in the state->volume variable is called, without copying the state, as if it had been called as the last operation of the material shader. Copying the state is not necessary because the volume shader does not return control to the material shader, so no state variables need to be preserved.

Unless shadow segment mode is in effect, volume shaders are also called when a light shader has returned; in this case the volume shader in state->volume is called once for the entire distance from the light source to the illuminated point (that is, the point whose material shader sampled the light). In shadow segment mode, volume shaders are not called for light rays but for every shadow ray segment from the illuminated point towards the light source. Some volume shaders may decide that they should not apply to light rays; this can be done by returning immediately if the state->type variable is miRAY_LIGHT.

Finally, volume shaders are called after an environment shader has been called. Note that if a volume shader is called after a material, light, or other shader, the return value of that other shader is discarded and the return value of the volume shader is used instead. The reason is that a volume shader can substitute a non-black color even if the original shader has given up. Volume shaders return miFALSE if no light can pass through the given volume, and miTRUE if the result color is non-black.

Material shaders have two separate state variables dealing with volumes: volume and refraction_volume. If the material shader casts a refraction or transparency ray, the tracing function copies the refraction volume shader, if there is one, to the volume shader after copying the state. This means that the next intersection point finds the refraction volume in state->volume; in effect, once the ray has entered an object, that object's interior volume shader is used. However, the material shader is responsible for detecting when a refraction ray exits the object and for overwriting state->refraction_volume with an appropriate outside volume shader, such as state->camera->volume, or a volume shader found by following the state->parent links.

Since volume shaders modify a color calculated by a previous material shader, environment shader, or light shader, they differ from those shaders in that they receive an input color, in the result argument, which they are expected to modify.

Environment Shaders

Environment shaders provide a color for rays that leave the scene entirely, and for rays that would exceed the trace depth limit.

Light Shaders

Light shaders are called from other shaders that sample a light using the mi_sample_light or mi_trace_light functions, which perform some calculations and then call the given light shader; a light shader may also be called directly when a visible ray hits a light source. mi_sample_light may also request to be called more than once if an area light source is being sampled. For an example of using mi_sample_light, see the section on material shaders above. mi_trace_light performs less exact shading for area lights and is provided for backwards compatibility only.

The light shader computes the amount of light contributed by the light source to a previously computed intersection point, stored in state->point. The calculation may be based on the direction state->dir from the light source to that point, and on the distance state->dist from the light source to it. There may also be shader parameters that specify directional and distance attenuation. Directional lights have no location; state->dist is undefined in this case.
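Distance attenuation of the kind those parameters describe can be sketched as a simple falloff factor by which the light shader multiplies its color. Linear falloff between a start and an end distance is one common convention; the function below is a hypothetical stand-in, not a mental ray API call.

```c
/* Linear distance attenuation: full intensity up to 'start', fading to
 * zero at 'end'. A light shader would multiply its computed light color
 * by this factor, using the distance found in state->dist. */
double dist_attenuation(double dist, double start, double end) {
    if (dist <= start) return 1.0;
    if (dist >= end)   return 0.0;
    return (end - dist) / (end - start);
}
```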

Light shaders are also responsible for shadow casting. Shadows are computed by finding all objects that are in the path of the light from the light source to the illuminated intersection point. This is done in the light shader by casting "shadow rays" after the standard light color computation, including attenuation, is finished. Shadow rays are cast from the light source back towards the illuminated point (or vice versa if shadow segment mode is enabled), along the same direction as the light ray. Every time an occluding object is found, that object's shadow shader, if it has one, is called; the shadow shader reduces the amount of light based on the object's transparency and color. If an occluding object is found that has no shadow shader, it is assumed to be opaque, and no light from the light source reaches the illuminated point.
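The shadow pass described above amounts to filtering the light color through each occluder's transmission and stopping as soon as an opaque object (one without a shadow shader) is found. A self-contained sketch with hypothetical types (a real shadow shader receives a miState and modifies the light color in place):

```c
#include <stddef.h>

typedef struct { double r, g, b; } light_rgb;

/* Each occluder transmits a fraction of the light per channel; an object
 * without a shadow shader is treated as fully opaque. */
typedef struct {
    int has_shadow_shader;
    light_rgb transmission;   /* used only if has_shadow_shader is set */
} occluder;

/* Filter the light color through every occluder along the shadow ray. */
light_rgb filter_shadow(light_rgb light, const occluder *occ, size_t n) {
    for (size_t i = 0; i < n; i++) {
        if (!occ[i].has_shadow_shader) {
            light_rgb black = {0.0, 0.0, 0.0};  /* opaque: no light passes */
            return black;
        }
        light.r *= occ[i].transmission.r;
        light.g *= occ[i].transmission.g;
        light.b *= occ[i].transmission.b;
    }
    return light;
}
```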
