
The first two components are straightforward, but we see two additional normal vectors, namely faceNormal1 and faceNormal2. These vectors describe the two face normals of the faces that share the edge the vertex lies on, namely face0 and face1.

The actual mathematics of testing if a vertex is on a silhouette edge is as follows. Assume we are in view space. Let v be a vector from the origin to the vertex we are testing—Figure 17.8. Let n0 be the face normal for face0 and let n1 be the face normal for face1. Then the vertex is on a silhouette edge if the following inequality is true:

(1)   (v · n0)(v · n1) ≤ 0

Inequality (1) is true if the signs of the two dot products differ, making the left-hand side negative. Recalling the properties of the dot product, the signs of the two dot products being different implies that one face is front facing and the other is back facing.

Now, consider the case where an edge has only one triangle sharing it, as in Figure 17.9; that triangle's normal will be stored in faceNormal1.

 


Figure 17.9: The edge defined by vertices v0 and v1 has only one face sharing it.

We define such an edge to always be a silhouette edge. To ensure that the vertex shader processes such edges as silhouette edges, we let faceNormal2 = -faceNormal1. Thus, the face normals point in opposite directions and inequality (1) is satisfied, since (v · n0)(v · -n0) = -(v · n0)² ≤ 0, indicating the edge is a silhouette edge.

17.5.3.3 Edge Generation

Generating the edges of a mesh is trivial; we simply iterate through each face in the mesh and compute a quad (degenerate, as in Figure 17.6) for each edge on the face.

Note: Each face has three edges since there are three edges to a triangle.

For the vertices of each edge, we also need to know the two faces that share the edge. One of the faces is the triangle the edge is on. For instance, if we’re computing an edge of the ith face, then the ith face shares that edge. The other face that shares the edge can be found using the mesh’s adjacency info.
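As a rough illustration (not the chapter sample's exact code), the following sketch walks each face of an ID3DXMesh and looks up, for each of the face's three edges, the neighboring face via the adjacency array returned by ID3DXMesh::GenerateAdjacency; the epsilon value and variable names are placeholders.

// Adjacency info: three DWORDs per face, one per edge; the value
// 0xffffffff means no other face shares that edge.
DWORD  numFaces  = mesh->GetNumFaces();
DWORD* adjacency = new DWORD[numFaces * 3];
mesh->GenerateAdjacency(0.001f, adjacency);

for(DWORD i = 0; i < numFaces; i++)        // the ith face...
{
    for(DWORD edge = 0; edge < 3; edge++)  // ...has three edges.
    {
        DWORD neighbor = adjacency[i * 3 + edge];

        // Build the degenerate quad for this edge here.  Store the
        // ith face's normal in faceNormal1.  If 'neighbor' is a valid
        // face index, store its face normal in faceNormal2; otherwise
        // set faceNormal2 = -faceNormal1 so the edge is always treated
        // as a silhouette edge.
    }
}

delete[] adjacency;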

17.5.4 The Silhouette Outlining Vertex Shader Code

We now present the vertex shader for rendering the silhouette edges. The primary task of the shader is to determine if the vertex passed in is on a silhouette edge. If it is, the vertex shader offsets the vertex by some defined scalar in the direction of the vertex normal.

//File: outline.txt
//Desc: Vertex shader renders silhouette edges.

//
// Globals
//

extern matrix WorldViewMatrix;
extern matrix ProjMatrix;

static vector Black = {0.0f, 0.0f, 0.0f, 0.0f};

//
// Structures
//

struct VS_INPUT
{
    vector position    : POSITION;
    vector normal      : NORMAL0;
    vector faceNormal1 : NORMAL1;
    vector faceNormal2 : NORMAL2;
};

struct VS_OUTPUT
{
    vector position : POSITION;
    vector diffuse  : COLOR;
};

//
// Main
//

VS_OUTPUT Main(VS_INPUT input)
{
    //zero out each member in output
    VS_OUTPUT output = (VS_OUTPUT)0;

    //transform position to view space
    input.position = mul(input.position, WorldViewMatrix);

    //Compute a vector in the direction of the vertex
    //from the eye.  Recall the eye is at the origin
    //in view space - eye is just camera position.
    vector eyeToVertex = input.position;

    //transform normals to view space.  Set w
    //components to zero since we're transforming vectors.
    //Assume there are no scalings in the world
    //matrix as well.
    input.normal.w      = 0.0f;
    input.faceNormal1.w = 0.0f;
    input.faceNormal2.w = 0.0f;

    input.normal      = mul(input.normal,      WorldViewMatrix);
    input.faceNormal1 = mul(input.faceNormal1, WorldViewMatrix);
    input.faceNormal2 = mul(input.faceNormal2, WorldViewMatrix);

    //compute the cosine of the angles between
    //the eyeToVertex vector and the face normals.
    float dot0 = dot(eyeToVertex, input.faceNormal1);
    float dot1 = dot(eyeToVertex, input.faceNormal2);

    //if cosines are different signs (positive/negative)
    //then we are on a silhouette edge.  Do the signs differ?
    if( (dot0 * dot1) < 0.0f )
    {
        //yes, then this vertex is on a silhouette edge,
        //offset the vertex position by some scalar in the
        //direction of the vertex normal.
        input.position += 0.1f * input.normal;
    }

    //transform to homogeneous clip space
    output.position = mul(input.position, ProjMatrix);

    //set outline color
    output.diffuse = Black;

    return output;
}
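The VS_INPUT structure above implies a vertex layout containing a position, the vertex normal, and the two face normals. As a minimal sketch, assuming each component is stored as a D3DXVECTOR3 (the sample's actual structure may differ), such a layout could be described with the following vertex declaration, where the NORMAL usage indices 0, 1, and 2 correspond to the NORMAL0, NORMAL1, and NORMAL2 semantics:

// stream, byte offset, type, method, usage, usage index
D3DVERTEXELEMENT9 decl[] =
{
    {0,  0, D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_POSITION, 0},
    {0, 12, D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_NORMAL,   0},
    {0, 24, D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_NORMAL,   1},
    {0, 36, D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_NORMAL,   2},
    D3DDECL_END()
};

IDirect3DVertexDeclaration9* outlineDecl = 0;
Device->CreateVertexDeclaration(decl, &outlineDecl);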

17.6 Summary

Using vertex shaders, we can replace the transformation and lighting stages of the fixed function pipeline. By replacing this fixed process with our own program (vertex shader), we can obtain a huge amount of flexibility in the graphical effects that we can achieve.

Vertex declarations are used to describe the format of our vertices. They are similar to flexible vertex formats (FVF) but are more flexible and allow us to describe vertex formats that FVF cannot describe. Note that if our vertices can be described by an FVF, we can still use one; internally, however, it is converted to a vertex declaration.


For input, usage semantics specify how vertex components are mapped from the vertex declaration to variables in the HLSL program. For output, usage semantics specify what a vertex component is going to be used for (e.g., position, color, texture coordinate, etc.).


Chapter 18

Introduction to Pixel Shaders

A pixel shader is a program executed on the graphics card’s GPU during the rasterization process for each pixel. (Unlike vertex shaders, Direct3D will not emulate pixel shader functionality in software.) It essentially replaces the multitexturing stage of the fixed function pipeline and gives us the ability to manipulate individual pixels directly and access the texture coordinate for each pixel. This direct access to pixels and texture coordinates allows us to achieve a variety of special effects, such as multitexturing, per pixel lighting, depth of field, cloud simulation, fire simulation, and sophisticated shadowing techniques.

You can test the pixel shader version that your graphics card supports by checking the PixelShaderVersion member of the D3DCAPS9 structure and the macro D3DPS_VERSION. The following code snippet illustrates this:

// If the device's supported version is less than version 2.0
if( caps.PixelShaderVersion < D3DPS_VERSION(2, 0) )
    // Then pixel shader version 2.0 is not supported on this device.
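For reference, here is a minimal sketch of obtaining the capabilities structure used in the test above; it assumes Device is a valid IDirect3DDevice9 pointer.

D3DCAPS9 caps;
Device->GetDeviceCaps(&caps);  // fill out the capabilities structure

if( caps.PixelShaderVersion < D3DPS_VERSION(2, 0) )
    ::MessageBox(0, "ps_2_0 not supported on this device.", 0, 0);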

Objectives

To obtain a basic understanding of the concepts of multitexturing

To learn how to write, create, and use pixel shaders

To learn how to implement multitexturing using a pixel shader


18.1 Multitexturing Overview

Multitexturing is perhaps the simplest of the techniques that can be implemented using a pixel shader. Furthermore, since pixel shaders replace the multitexturing stage, it follows that we should have a basic understanding of what the multitexturing stage is and does. This section presents a concise overview of multitexturing.

When we originally discussed texturing back in Chapter 6, we omitted a discussion on multitexturing in the fixed function pipeline for two reasons: First, multitexturing is a bit of an involved process, and we considered it an advanced topic at the time. Additionally, the fixed function multitexturing stage is replaced by the new and more powerful pixel shaders; therefore it made sense not to spend time on the outdated fixed function multitexturing stage.

The idea behind multitexturing is somewhat related to blending. In Chapter 7 we learned about blending the pixels being rasterized with the pixels that were previously written to the back buffer to achieve a specific effect. We extend this same idea to multiple textures. That is, we enable several textures at once and then define how these textures are to be blended together to achieve a specific effect. A common use for multitexturing is to do lighting. Instead of using Direct3D’s lighting model in the vertex processing stage, we use special texture maps called light maps, which encode how a surface is lit. For example, suppose we wish to shine a spotlight on a large crate. We could define a spotlight as a D3DLIGHT9 structure, or we could blend together a texture map representing a crate and a light map representing the spotlight as Figure 18.1 illustrates.

Figure 18.1: Rendering a crate lit by a spotlight using multitexturing. Here we combine the two textures by multiplying the corresponding texels together.


Note: As with blending in Chapter 7, the resulting image depends on how the textures are blended. In the fixed function multitexturing stage, the blending equation is controlled through texture render states. With pixel shaders we can write the blend function programmatically in code as a simple expression. This allows us to blend the textures in any way we want. We elaborate on blending the textures when we discuss the sample application for this chapter.

Blending the textures (two in this example) to light the crate has two advantages over Direct3D’s lighting:

The lighting is precalculated into the spotlight light map. Therefore, the lighting does not need to be calculated at run time, which saves processing time. Of course, the lighting can only be precalculated for static objects and static lights.

Since the light maps are precalculated, we can use a much more accurate and sophisticated lighting model than Direct3D’s model. (Better lighting results in a more realistic scene.)

Remark: The multitexturing stage is typically used to implement a full lighting engine for static objects. For example, we might have a texture map that holds the colors of the object, such as a crate texture map. Then we may have a diffuse light map to hold the diffuse surface shade, a separate specular light map to hold the specular surface shade, a fog map to hold the amount of fog that covers a surface, and a detail map to hold small, high frequency details of a surface. When all these textures are combined, it effectively lights, colors, and adds details to the scene using only lookups into precalculated textures.

Note: The spotlight light map is a trivial example of a very basic light map. Typically, special programs are used to generate light maps given a scene and light sources. Generating light maps goes beyond the scope of this book. For the interested reader, Alan Watt and Fabio Policarpo describe light mapping in 3D Games: Real-time Rendering and Software Technology.

18.1.1 Enabling Multiple Textures

Recall that textures are set with the IDirect3DDevice9::SetTexture method and sampler states are set with the IDirect3DDevice9::SetSamplerState method, which are prototyped as:

HRESULT IDirect3DDevice9::SetTexture(
    DWORD Stage,  // specifies the texture stage index
    IDirect3DBaseTexture9 *pTexture
);

HRESULT IDirect3DDevice9::SetSamplerState(
    DWORD Sampler,  // specifies the sampler stage index
    D3DSAMPLERSTATETYPE Type,
    DWORD Value
);

Note: A particular sampler stage index i is associated with the ith texture stage. That is, the ith sampler stage specifies the sampler states for the ith set texture.

The texture/sampler stage index identifies the texture/sampler stage to which we wish to set the texture/sampler. Thus, we can enable multiple textures and set their corresponding sampler states by using different stage indices. Previously in this book, we always specified stage 0, denoting the first stage, because we only used one texture at a time. So, for example, if we need to enable three textures, we use stages 0, 1, and 2 like this:

// Set first texture and corresponding sampler states.
Device->SetTexture(0, Tex1);
Device->SetSamplerState(0, D3DSAMP_MAGFILTER, D3DTEXF_LINEAR);
Device->SetSamplerState(0, D3DSAMP_MINFILTER, D3DTEXF_LINEAR);
Device->SetSamplerState(0, D3DSAMP_MIPFILTER, D3DTEXF_LINEAR);

// Set second texture and corresponding sampler states.
Device->SetTexture(1, Tex2);
Device->SetSamplerState(1, D3DSAMP_MAGFILTER, D3DTEXF_LINEAR);
Device->SetSamplerState(1, D3DSAMP_MINFILTER, D3DTEXF_LINEAR);
Device->SetSamplerState(1, D3DSAMP_MIPFILTER, D3DTEXF_LINEAR);

// Set third texture and corresponding sampler states.
Device->SetTexture(2, Tex3);
Device->SetSamplerState(2, D3DSAMP_MAGFILTER, D3DTEXF_LINEAR);
Device->SetSamplerState(2, D3DSAMP_MINFILTER, D3DTEXF_LINEAR);
Device->SetSamplerState(2, D3DSAMP_MIPFILTER, D3DTEXF_LINEAR);

This code enables Tex1, Tex2, and Tex3 and sets the filtering modes for each texture.

18.1.2 Multiple Texture Coordinates

Recall from Chapter 6 that for each 3D triangle, we want to define a corresponding triangle on the texture that is to be mapped to the 3D triangle. We did this by adding texture coordinates to each vertex.

Thus, every three vertices defining a triangle defined a corresponding triangle on the texture.

Since we are now using multiple textures, for every three vertices defining a triangle we need to define a corresponding triangle on each of the enabled textures. We do this by adding extra sets of texture coordinates to each vertex, one set corresponding to each enabled texture. For instance, if we are blending three textures together, then each vertex must have three sets of texture coordinates that index into the three enabled textures. Thus, a vertex structure for multitexturing with three textures would look like this:

struct MultiTexVertex
{
    MultiTexVertex(float x,  float y,  float z,
                   float u0, float v0,
                   float u1, float v1,
                   float u2, float v2)
    {
        _x  = x;  _y  = y;  _z  = z;
        _u0 = u0; _v0 = v0;
        _u1 = u1; _v1 = v1;
        _u2 = u2; _v2 = v2;
    }

    float _x, _y, _z;
    float _u0, _v0; // Texture coordinates for texture at stage 0.
    float _u1, _v1; // Texture coordinates for texture at stage 1.
    float _u2, _v2; // Texture coordinates for texture at stage 2.

    static const DWORD FVF;
};
const DWORD MultiTexVertex::FVF = D3DFVF_XYZ | D3DFVF_TEX3;

Observe that the flexible vertex format flag D3DFVF_TEX3 is specified, denoting that the vertex structure contains three sets of texture coordinates. The fixed function pipeline supports up to eight sets of texture coordinates. To use more than eight, you must use a vertex declaration and the programmable vertex pipeline.

Note: In the newer pixel shader versions, we can use one texture coordinate set to index into multiple textures, thereby removing the need for multiple texture coordinates. Of course this assumes the same texture coordinates are used for each texture stage. If the texture coordinates for each stage are different, then we will still need multiple texture coordinates.

18.2 Pixel Shader Inputs and Outputs

Two things are input into a pixel shader: colors and texture coordinates. Both are per pixel.

Note: Recall that vertex colors are interpolated across the face of a primitive.

A per pixel texture coordinate is simply the (u, v) coordinates that specify the texel in the texture that is to be mapped to the pixel in question. Direct3D computes both colors and texture coordinates per pixel, from vertex colors and vertex texture coordinates, before entering the pixel shader. The number of colors and texture coordinates input into the pixel shader depends on how many colors and texture coordinates were output by the vertex shader. For example, if a vertex shader outputs two colors and three texture coordinates, then Direct3D will calculate two colors and three texture coordinates per pixel and input them into the pixel shader. We map the input colors and texture coordinates to variables in our pixel shader program using the semantic syntax. Using the previous example, we would write:

struct PS_INPUT
{
    vector c0 : COLOR0;
    vector c1 : COLOR1;
    float2 t0 : TEXCOORD0;
    float2 t1 : TEXCOORD1;
    float2 t2 : TEXCOORD2;
};

For output, a pixel shader outputs a single computed color value for the pixel:

struct PS_OUTPUT
{
    vector finalPixelColor : COLOR0;
};

18.3 Steps to Using a Pixel Shader

The following list outlines the steps necessary to create and use a pixel shader.

1. Write and compile the pixel shader.

2. Create an IDirect3DPixelShader9 interface to represent the pixel shader based on the compiled shader code.

3. Enable the pixel shader with the IDirect3DDevice9::SetPixelShader method.

Of course, we have to destroy the pixel shader when we are done with it. The next few subsections go into these steps in more detail.

18.3.1 Writing and Compiling a Pixel Shader

We compile a pixel shader the same way that we compile a vertex shader. First, we must write a pixel shader program. In this book, we write our shaders in HLSL. Once the shader code is written we compile the shader using the D3DXCompileShaderFromFile function, as described in section 16.2. Recall that this function returns a pointer to an ID3DXBuffer that contains the compiled shader code.
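As a rough sketch only (the file name, entry point, and variable names below are placeholders rather than the chapter sample's actual names), compiling, creating, enabling, and eventually releasing a pixel shader might look like this:

ID3DXBuffer* shader      = 0;
ID3DXBuffer* errorBuffer = 0;

// Step 1: compile the pixel shader (here against the ps_2_0 profile).
HRESULT hr = D3DXCompileShaderFromFile(
    "ps_multitex.txt",  // shader source file (hypothetical name)
    0,                  // no macro definitions
    0,                  // no custom #include handler
    "Main",             // entry point function (hypothetical name)
    "ps_2_0",           // shader profile to compile against
    D3DXSHADER_DEBUG,   // compile flags
    &shader,            // receives the compiled shader code
    &errorBuffer,       // receives error messages
    0);                 // (optional) constant table not needed here

if( FAILED(hr) )
{
    // Output errorBuffer contents and handle the failure...
}

// Step 2: create the pixel shader interface from the compiled code.
IDirect3DPixelShader9* MultiTexPS = 0;
Device->CreatePixelShader((DWORD*)shader->GetBufferPointer(), &MultiTexPS);
shader->Release();

// Step 3: enable the pixel shader before drawing.
Device->SetPixelShader(MultiTexPS);

// ...draw geometry...

// Destroy the pixel shader when we are done with it.
MultiTexPS->Release();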
