3 Describing Models and Scenes in RenderMan
ShadingRate area
ShadingRate specifies how often shaders are run on the surface of the geometric primitive. This is done by specifying a maximum distance between shading samples. If the shader on a primitive is not run frequently enough (that is, the runs, or samples, of the shader are spaced too far apart), then a large region of the surface will have the same color, and a lot of the subtle detail of the texture will be lost. If the shader on a primitive is run too frequently (that is, the shader samples are unnecessarily close together), then most shader samples return the same color as neighboring samples, and this redundant calculation wastes a lot of valuable compute cycles.
For historical reasons, shading rate is specified as an area rather than a frequency (that is, a rate). The parameter area is a floating point number that gives the area in pixels of the largest region on the primitive that any given shader sample may represent. Typically, a shading rate of 1.0, meaning one unique shading sample on each pixel, is used in high-quality renderings. Preview renderings, or renderings that don't require calculation of the surface color (for example, depth-only images), could set the shading rate significantly sparser (4.0 or 8.0) and still get an acceptable image.
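In RIB, the trade-off might be sketched like this (the sphere is just a stand-in primitive):

```rib
# Final-quality frame: at most one pixel of area per shading sample
ShadingRate 1.0
Sphere 1 -1 1 360

# Quick preview: each shading sample may cover up to 8 pixels of area
ShadingRate 8.0
Sphere 1 -1 1 360
```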
ShadingInterpolation style
Occasionally, the renderer needs to know a primitive's color at some point between the shading samples. That point is too close to the existing samples to warrant another whole run, so the renderer simply interpolates between the existing values. The string parameter style controls the type of interpolation. If style is "constant", then the renderer simply reuses one of the neighboring shader samples (creating a small region with constant color). If style is "smooth", then the renderer does a bilinear interpolation between the neighboring values to get a smooth approximation of the color. This technique is often called Gouraud shading.
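Both controls are attributes, so a typical high-quality setup might read:

```rib
ShadingRate 1.0
ShadingInterpolation "smooth"     # bilinear (Gouraud-style) interpolation
# ShadingInterpolation "constant" would reuse the nearest sample instead
```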
3.5.1 Sidedness
As with most graphics APIs, RenderMan's geometric primitives are thin sheets. That is, they are 2D (no thickness) objects that have been placed and generally bent into the third dimension. While it is convenient to think of them as being like sheets of paper floating in 3D, a sheet of paper has one very important property that RenderMan primitives do not have. Paper has two sides. RenderMan primitives have only one.
That is to say, the natural method for defining RenderMan's geometric primitives includes the definition of the surface normal, which points away from the surface perpendicularly. And it points in only one direction, which we call the front. The direction that it doesn't point, we call the back. Primitives whose front sides are visible from the camera position we call front facing, and naturally those whose back sides are visible from the camera position we call back facing.
For two reasons, it is common for renderers to care about the difference between back facing and front facing sections of the primitive. First, most shading equations are sensitive to the direction of the surface normal and will generally create black (or otherwise incorrectly shaded) images of any objects that are back facing. This is generally repaired by reversing the direction of the bad normals whenever they are visible (in the Shading Language, this is done by the faceforward routine). Second, many objects being modeled are solid objects, completely enclosed with no holes and no interiors visible. It should be obvious that any back facing primitives would be on the far side of the object and as such would never be seen (they will be covered up by front facing primitives on the near side). The renderer can save time by throwing those never-visible primitives away, a process called backface culling. However, this should not be done on objects that have holes or are partially transparent, because if we can see "inside" the object, we will intentionally see the back facing primitives in the rear.
For these reasons, the modeler should explicitly state whether an object is intended to be seen from both sides, or only the front. And if only from the front, it is convenient to be able to explicitly state which side the modeler considers the front to be! RenderMan has calls that control both of these attributes.
Sides n
Sides is a very simple call. If the integer parameter n is 1, the object is one-sided. The renderer can assume that the primitive will be seen only from the front and can backface cull it, or any portion of it that is seen from the back. If n is 2, the object is two-sided. The renderer must assume it can legally be seen from both sides. The renderer will not automatically flip surface normals on the back facing portions of a two-sided primitive; the shader must do this if it is appropriate.
Orientation direction
Orientation is also a very simple call, although many RenderMan users find it very confusing. Given that the standard mathematics of each primitive defines which way the default surface normal will face, Orientation simply controls whether we use that default definition or whether we should instead flip the normal around and "turn the primitive inside out." If direction is "outside", the default normal is used, and the primitive is outside out. If direction is "inside", the normal is flipped, and the primitive is considered to be inside out.
Figuring out the standard mathematics of each primitive is the confusing part. This is because the transformation matrix influences the mathematics. Transformation matrices are either right-handed or left-handed, depending on their data (see Section 2.4.4). Geometric primitives that are created in a
right-handed coordinate system use the "right-hand rule" (sometimes called the counterclockwise rule) to find the default ("outside") surface normal. Geometric primitives that are created in a left-handed coordinate system use the "left-hand rule" (also known as the clockwise rule) to find it.
ReverseOrientation
ReverseOrientation flips the value of orientation. It's there because it is sometimes handy for modelers to use.
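A sketch of how these three calls combine (the primitives chosen are arbitrary examples):

```rib
AttributeBegin
  Sides 1               # closed solid: the renderer may backface cull
  Orientation "outside" # use the default surface normal
  Sphere 1 -1 1 360
AttributeEnd

AttributeBegin
  Sides 2               # an open shape, visible from both sides
  ReverseOrientation    # flip which direction counts as the front
  Paraboloid 1 0 1 360
AttributeEnd
```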
3.5.2 Renderer Control
In addition to the visual appearance parameters, the RenderMan API provides the opportunity for the modeler (or user) to set various other attributes that control the way that the renderer handles geometric primitives. Some of these attributes control details of advanced rendering features and are described with them in the following sections. Other parameters are very renderer specific-they are passed into the renderer through the generic extension backdoors like Attribute.
GeometricApproximation type value
The GeometricApproximation call provides a way for modelers to control the way that the renderer approximates geometric primitives when it tessellates them or otherwise converts them into simpler representations for ease of rendering. The predefined type of "flatness" specifies an error tolerance so that tessellations do not deviate from the true surface by more than value pixels. Renderers can define additional type metrics that apply to their particular algorithm.
Attribute type parameterlist
Attribute provides the entry point for renderer-specific data that are specific to the geometric primitives or individual lights and therefore fall into the category of attributes. For example, both PRMan and BMRT have attributes that control tessellation styles, set arbitrary limits on various calculations that are theoretically unbounded, determine special light characteristics, and so on. These options are generally very algorithm specific, and as a result renderers tend to have non-overlapping sets. Renderers ignore attributes they do not understand, so it is possible to write RIB files that contain the appropriate sets of parameters for multiple renderers.
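For example, a RIB stream might carry attributes aimed at more than one renderer. The first call below uses PRMan's displacement-bound attribute; the second is a made-up name purely for illustration:

```rib
# PRMan-specific: how far displacement shaders may move surface points
Attribute "displacementbound" "sphere" [0.25] "coordinatesystem" ["shader"]

# A hypothetical attribute for some other renderer; renderers that do
# not recognize it simply ignore it
Attribute "myrenderer" "maxsubdivision" [4]
```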
ErrorHandler style
The ErrorHandler call determines what the renderer should do if it detects an error while reading a scene description or rendering it. There are several choices for style that determine how paranoid the renderer should be about various problems, which come in three severities: warnings, which say the renderer found something wrong but tried to correct it, so it may or may not affect the final image; errors, which say the image will probably have clear flaws; and severe errors, where the renderer's internal state is badly trashed, and an image probably cannot even be generated. The exact severity of an individual problem is up to the renderer to determine, of course.
The three built-in error handlers are:
□ "ignore": ignore all errors, continue to render regardless
□ "print": print warning and error messages, but continue to render
□ "abort": print warning messages, but print and terminate on any error
In the C binding, the user can supply an error-handling subroutine that is called at each such error, and this subroutine will then have to make a determination of what to do based on the error's type and severity.
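In RIB, the handler is simply selected by name:

```rib
ErrorHandler "print"   # report problems, but keep rendering
```

In the C binding the equivalent call is RiErrorHandler, and the three built-in handlers are available as the functions RiErrorIgnore, RiErrorPrint, and RiErrorAbort.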
3.6 Lights
Most shaders will not generate a very interesting color in the dark. RenderMan, like all graphics APIs, has virtual light sources that are modeled as objects in the scene. The light sources emit light, which is then reflected toward the camera by the surface shaders on the objects, giving them color.
The RenderMan Interface defines two types of light sources: point lights and area lights. Point lights usually refer to light sources that emit light from a single point in space; area lights usually refer to light sources that emit light from some or all points in a specific (but finite) region of space. In RenderMan, of course, the exact details of the way that light emanates from a light source, and then interacts with the surface, are under the complete control of the Shading Language shaders that are associated with them. For this reason, it is probably more descriptive to call RenderMan light sources either nongeometric or geometric.
Nongeometric lights are light sources for which there is a single "ray" of light from the light source to any given point on the surface. The origin of the ray of light might not be unique (since it could be under shader control), but regardless, no geometric primitive shapes are needed in the RIB file to describe the sources of the rays.
LightSource shadername handle parameterlist
LightSource creates a nongeometric light source. The exact manner that light is emitted from the light source is described by the Shading Language shader shadername. The parameters for the shader are in the parameterlist. In order to refer to this specific light source later, the light source is given a unique identifier, the integer handle. It is an error to reuse a light handle, because the modeler will lose the ability to refer to the original light source.
In the C binding for RiLightSource, the routine does not take a handle parameter, but instead returns a handle value to the modeling program (which is guaranteed to be unique).
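A sketch using two of the standard light source shaders:

```rib
# Handle 1: a directional light aimed downward from above
LightSource "distantlight" 1 "intensity" [1.0] "from" [0 10 0] "to" [0 0 0]

# Handle 2: a warm point light; each handle must be unique
LightSource "pointlight" 2 "intensity" [20.0] "from" [3 5 -2] "lightcolor" [1.0 0.9 0.8]
```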
Geometric lights are light sources for which there are multiple rays of light, emanating from independent positions, streaming toward each point on the surface. The easiest way to describe those positions is to associate some geometric shape with the light source. In RenderMan, that geometric shape is specified with normal geometric primitives in the RIB file. Those primitives might be considered the actual body of the emissive object, like a light bulb or a light panel.
AreaLightSource shadername handle parameterlist
AreaLightSource starts the definition of a geometric (area) light source. The exact manner in which light is emitted from the light source is described by the Shading Language shader shadername, but the region of space that will be used to generate the multiple light source emission positions is defined by the standard RenderMan geometric primitives that follow this call. The area light stops accumulating new primitives when a new (or a "null") area light is started or when the current attribute block is ended.
As with LightSource, an AreaLightSource is identified by the integer handle in RIB and by the return value of RiAreaLightSource in C.
Not all rendering algorithms can handle area light sources, as computing them with enough visual quality to be believable is a difficult problem. Of the RenderMan renderers, BMRT can do so, but PRMan cannot. If an area light is requested of PRMan, a point light is generated instead, at the origin of the current coordinate system.
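A minimal sketch, assuming an area light shader named "arealight" is available:

```rib
AttributeBegin
  AreaLightSource "arealight" 3 "intensity" [8.0]
  # This bilinear patch becomes the emitting surface of light 3
  Patch "bilinear" "P" [-1 2 -1  1 2 -1  -1 2 1  1 2 1]
AttributeEnd
# Ending the attribute block also ends the area light definition;
# primitives from here on are ordinary geometry again
```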
3.6.1 Illumination List
Light sources are handled in a slightly more complex manner than you might first guess, for two reasons. First, we would like to place light sources in their "obvious" place in the geometric transformation hierarchy (for example, the headlights of a car would be easiest to describe while we are describing the car's front-end grill area). However, we must abide by the RenderMan requirement that all attributes that affect objects must appear before those objects (so the headlights must be defined before the asphalt road is defined). This puts significant (and sometimes contradictory) restrictions on the timing of LightSource calls versus geometry in the scene description. Second, because this is computer graphics, we would like some flexibility about which objects are illuminated by which lights. In fact, it is almost required that any subset of the lights in the scene can shine on any subset of the geometric primitives in the scene. In this sense, the light sources that illuminate an object should be part of the visual attributes of that object.
Both of these problems are solved by the concept of an illumination list. The light sources themselves are not attributes of the primitive, they are independent objects in the scene description. However, the graphics state has an attribute that is the list of all lights currently illuminating geometric primitives. This list can be modified as an attribute, adding or deleting lights from the list. Equivalently, you might think of light sources as having an attribute in the graphics state that says whether they are "on" or "off." In either case, because the list is an attribute, pushing the attribute stack allows you to modify the lights temporarily and then restore them to their previous state by popping the stack.
Illuminate handle state
Illuminate turns a light source on or off. The light source to be modified is specified by handle, the light source handle that was given to the light source when it was created. state is an integer, which is either 1 for on or 0 for off.
A light source is on when it is defined. It then illuminates all subsequent geometric primitives. It can be turned off and back on by using the Illuminate call. When the attribute stack where the light source was created is finally popped, the light source is turned off. However, the light sources themselves are global to the scene-it is only the light list that is an attribute. Therefore, popping the stack does not destroy the light, it only turns it off. Later, it can then be turned back on at any point in the scene description, even outside the hierarchical tree where it was originally defined. This is how we permit lights to illuminate objects outside their hierarchy, such as the car headlights illuminating the road.
Some modeling systems have the idea of "local lights" and "global lights." Local lights are specific to particular objects, whereas global lights are available to shine on any objects. RenderMan doesn't have those specific concepts, as they are subsets of the more flexible illumination lists. Local lights are merely buried in the geometric hierarchy of a particular object and are never turned on for other objects. Global lights are emitted early in the model (usually immediately after WorldBegin, before all of the real geometric primitives) and then are turned on and off as appropriate.
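The headlight example might be sketched like this (the shader choices and geometry placeholders are illustrative):

```rib
WorldBegin
  LightSource "pointlight" 1 "from" [0 10 0]      # "global" light, on by default
  AttributeBegin
    Translate 0 0 5                               # place the car
    LightSource "spotlight" 2 "coneangle" [0.4]   # headlight, buried in the car hierarchy
    # ... car geometry, illuminated by lights 1 and 2 ...
  AttributeEnd           # popping the attribute stack turns light 2 off
  Illuminate 2 1         # turn the headlight back on, outside its hierarchy
  # ... road geometry, illuminated by lights 1 and 2 ...
WorldEnd
```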
3.7 External Resources
In addition to the scene description, which is provided to the renderer through one of the RenderMan APIs (either C or RIB), the renderer needs access to several types of external data sources in order to complete renderings. These files-texture maps, Shading Language shaders, and RIB archives-are identified in the scene description by a physical or symbolic filename, and the renderer finds and loads them when it needs the data.
3.7.1 Texture Maps
Texture maps are relatively large data files, and it is quite common to use a lot of them in a high-quality scene. Of course, images come in a wide variety of file formats, and not all of these are necessarily appropriate for loading on demand by the renderer. In fact, over the years, computer graphics researchers have sometimes found it valuable to preprocess image data into some prefiltered form (such as mip-maps) in order to save computation or speed data access during the rendering phase. PRMan, for example, requires all image data to be converted into a private format that was specifically designed for fast access and low memory footprint. Similarly, while BMRT is able to read ordinary TIFF files directly during rendering, it benefits greatly from converting the file into a mip-map prior to rendering.
The RenderMan Interface provides for the situation where a renderer cannot, or chooses not to, read arbitrary image file formats as textures. There are four entry points for converting "arbitrary" image files into texture files, specialized to the particular texturing functions defined in the Shading Language. These functions provide the ability to filter the pixels during this conversion process. Therefore, several of the calls have parameters that specify a filtering function and texture wrapping parameters.
The filter functions for texture are the same ones used by PixelFilter and have filter kernel width parameters measured in pixels. Texture wrapping refers to the value that should be returned when the texture is accessed beyond the range of its data (that is, with texture coordinates outside the unit square). The texture's wrap mode can be "black", returning 0 outside of its range; "periodic", repeating (tiling) itself infinitely over the 2D plane; or "clamp", where the edge pixels are copied to cover the plane. The texture filtering code takes the wrapping behavior into account in order to correctly filter the edges of the texture. Both filtering and wrapping are independently controllable in the s (horizontal) and t (vertical) directions on the texture.
MakeTexture imagename texturename swrap twrap filter swidth twidth parameterlist
A simple texture map is an image that is accessed by specifying a 2D rectangle of coordinates on the map. It will typically be a single-channel (monochrome) map or a three-channel or four-channel (color or RGBA) map, although a fancy shader might use the data for something other than simple object color, of course.
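For example, converting a TIFF into a renderer-ready texture (the filenames are illustrative):

```rib
# Tile in s, clamp in t, prefilter with a 2x2-pixel Gaussian kernel
MakeTexture "wood.tif" "wood.tex" "periodic" "clamp" "gaussian" 2 2
```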
MakeLatLongEnvironment imagename texturename filter swidth twidth parameterlist
An environment map is an image that is accessed by specifying a direction vector. The map is assumed to wrap around and enclose the origin of the vector, and the value returned is the color where the ray hits the map. A latitude-longitude environment map wraps around the origin like a sphere, so it must wrap periodically in the s direction and it "pinches" at the poles in the t direction. Typically, painted environment maps will be made in this format.
MakeCubeFaceEnvironment pximage nximage pyimage nyimage pzimage nzimage texturename fov filter swidth twidth parameterlist
A cube-face environment map wraps around the origin as a box. It requires six input images, one for each face of the box, identified by their axial direction (positive x, negative x, etc.). Typically, rendered environment maps will be made in this format, by placing the camera in the center and viewing in each direction.
For better filtering, the images can be wider than the standard 90-degree views so that they overlap slightly. The floating point fov parameter identifies the field of view of the cameras that rendered the images.
MakeShadow imagename texturename parameterlist
A shadow map is an image that contains depth information and is accessed by specifying a position in space. The value returned identifies the occlusion of that position by the data in the map. Typically, the map is a rendered image from the point of view of a light source, and the lookup asks if that position can be seen by the light source or if it is hidden behind some other, closer object. Because it makes no sense to filter depth images, there are no filtering parameters to this particular call.
The shadow map also contains information about the position and parameters of the shadow camera, which are used by the Shading Language shadow call to compute the shadow occlusion of objects in the scene. For this reason, it is not possible to create a shadow map out of any random image, but only from those images that have stored this important camera information with the depth information.
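A sketch of the two-pass workflow (the depth-image name and the shadow-aware light shader are assumptions for illustration):

```rib
# Pass 1 (not shown): render a depth-only image light.z from the
# light's point of view, preserving the shadow camera information
MakeShadow "light.z" "light.shd"

# Pass 2: a shadow-casting light shader looks up occlusion in the map
LightSource "shadowspot" 1 "shadowname" ["light.shd"]
```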
3.7.2 Shaders
Shaders are written in the RenderMan Shading Language. Every renderer that supports the Shading Language has a compiler that converts the Shading Language source code into a form that is more easily read and manipulated by the renderer. The object code produced by the compiler is usually some form of byte-code representation for a virtual machine that is implemented by the renderer (though some renderers may compile to machine code). Shader object code is not generally portable between renderer implementations, but is often portable between machine architectures.
Interestingly, unlike texture maps, the RenderMan Interface does not have an API call for converting Shading Language shader source code into shaders. It is assumed that this process will be done off-line. The name of a shader provided in Surface and other calls is symbolic, in that the renderer has some algorithm for finding the right shader object code based on a relatively simple name. Typically, the name refers to the prefix of a filename, where the suffix is known (in PRMan it is .slo), and the renderer has some type of search path mechanism for finding the file in the directory structure.
3.7.3 RIB Archive Files
RIB files are referred to as archive files in the RenderMan Interface Specification document because as metafiles their function was to archive the calls to the C procedural interface. When it was recognized that RIB files might be used as containers of more abstract scene descriptions, rather than merely the log of a session to render a particular frame, some small accommodation was made in the file format for facilitating that.
RIB files may have comments, started by # characters and ending at the following end-of-line. These comments are ignored by the renderer as it is reading the RIB file, but it is possible to put data into these comments that is meaningful to a special RIB parsing program. Such a program could read these comments and act on them in special ways. For this reason, the comments are called user data records. Appendix D of the RenderMan Interface Specification discusses special data records that can be supplied to a hypothetical "render manager" program. The predefined commands to this program are called structure comments, because they mostly deal with nonrendering information that describes the structure of the model that was contained in the RIB file. Such structure comments are denoted by a leading ##. Other comments, random user comments, get a single #.
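A fragment illustrating the two kinds of comments (the structure comments shown follow the Appendix D conventions):

```rib
##RenderMan RIB-Structure 1.1
##Scene kitchen
##CreationDate June 12 1999
# a free-form user comment, ignored by the renderer
```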
The C interface has a call for putting such a data record into the scene description (under the assumption that the calls are being recorded in an archive). Moreover, the C interface can read these data records back, giving the application developer a way to put private data into the file and then interpret and act on them when the file is read back in.
RiArchiveRecord(RtToken type, char *format, ...)
This call writes arbitrary data into the RIB file. The parameter type denotes whether the record is a regular "comment" or a "structure" comment. The rest of the call is a printf-like list of parameters, with a string format and any number of required additional parameters.
ReadArchive filename
The ReadArchive call will read the RIB file filename. Each RIB command in the archive will be parsed and executed exactly as if it had been in-line in the referencing RIB file. This is essentially the RIB file equivalent of the C language #include mechanism.
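For example (the archive path is illustrative):

```rib
# Splice an archived model in-line, as if its contents appeared here
ReadArchive "props/teacup.rib"
```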
In the C API version of this routine, there is a second parameter that is a modeler function that will be called for any RIB user data record or structure comment found in the file. This routine has the same C prototype as RiArchiveRecord. Supplying such a callback function allows the application program to notice user data records and then execute special behavior based on them while the file is being read.
3.8 Advanced Features
The RenderMan Interface has specific API calls to enable and control many advanced image synthesis features that are important for the highest-quality images, such as those used in feature-film special effects. Some of these features have been part of the computer graphics literature for decades, but RenderMan is still the only graphics API that supports them.
3.8.1 Motion Blur
In any computer-generated animation that is supposed to mesh with live-action photography, or simulate it, one extremely important visual quality is the appearance of motion blur. In order to adequately expose the film, live-action camera shutters are open for a finite length of time, generally on the order of 1/48 of a second per frame. Fast-moving objects leave a distinctive blurry streak across the film. Rendered images that don't simulate this streak appear to strobe because the image appears sharply and crisply at one spot in one frame and then at a different spot in the next frame. Any computer graphics imagery that does not have the same motion blur streak as a real photograph will be instantly recognizable by audiences as "fake." Stop-motion photography had this strobing problem, and when mechanisms for alleviating it (such as motion control cameras and ILM's "go-motion" armatures) were invented in the mid-1970s and early 1980s, the increased realism of miniature photography was stunning.
Specifying motion in a RenderMan scene requires the use of three API calls.
Shutter opentime closetime
Shutter is a camera option; that is, it must be called during the option phase of the scene description while other parameters of the camera (such as its position) are being set up. The two floating point parameters specify the time that the virtual camera's shutter should open and close. If opentime and closetime are identical (or if no Shutter statement was found at all), the rendered image will have no motion blur.
The scale of the times is arbitrary; users can choose to measure time in frames, seconds from the beginning of the scene, minutes from the beginning of the movie, or any other metric that is convenient for the modeling system. The choice matters only because MotionBegin calls have times with the same time scale, as does the Shading Language global variable time.
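For example, if the modeler measures time in frames, a 180-degree film-style shutter for frame 1 might be specified as:

```rib
Shutter 1.0 1.5   # shutter open for the first half of frame 1
```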
