Backface Culling - Used to draw only
triangles that face the camera and increase drawing performance.
See Winding Order. Most of the time, only triangles that face the
camera need to be drawn. The winding order can be used to determine
which triangles of the mesh face the camera. Backface culling is set
to either clockwise or counterclockwise to determine which side gets
culled, or dropped out of the drawing
process. If it is turned off, the triangle will be drawn whether it
faces towards or away from the camera and regardless of whether it
is on the inside or outside of a model. If the model is fully
enclosed, it is impossible to see the inside, and therefore you don't
need to EVER draw the insides of objects. So, setting backface
culling to cull one side means that either the inside or the outside
will not be drawn, which improves performance because there is less
to draw. Take a cube for example: if the cube
is a crate in a scene, you may never need to draw the inside. If on
the other hand, the cube IS the room then you ONLY need it to draw
the inside of the cube, since the camera is on the inside and will
never see the outside. You define "outside" to be a winding
direction and then cull the side that you don't want drawn.
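The winding test itself is simple enough to sketch. Below is a rough Python illustration with made-up helper names (not any graphics API), assuming mathematical Y-up coordinates; note that screen coordinates usually put Y down, which flips the apparent winding:

```python
# A sketch of how a rasterizer can decide winding after projection to 2D.
# The cross-product sign of the triangle's edge vectors tells us whether
# the three screen-space vertices run clockwise or counter-clockwise.

def signed_area(a, b, c):
    """Twice the signed area of triangle abc; the sign encodes winding."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1])

def is_front_facing(a, b, c, front_is_ccw=True):
    """Cull decision: keep the triangle only if its winding marks it as front."""
    area = signed_area(a, b, c)
    return area > 0 if front_is_ccw else area < 0

# Counter-clockwise triangle: front-facing under a CCW convention.
print(is_front_facing((0, 0), (1, 0), (0, 1)))   # True
# The same triangle listed in the opposite order is back-facing.
print(is_front_facing((0, 0), (0, 1), (1, 0)))   # False
```

Flipping `front_is_ccw` is the code-level equivalent of switching the culling mode between clockwise and counterclockwise.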
Content Pipeline - The process to get 3D
models into the memory of your game at run-time. This may sound
simple, but in reality this is extremely complex. Part of the
Content Pipeline might be the process that the artists go through to
create, rig, and animate the models before they even get to your
program. The content pipeline also includes sound, music, images,
movies, and any art asset that must get into the game. Once artists
put these pieces of data into a file, they can be loaded into your
program. However, there are many, many file formats and features, and
just importing these files may be a major ordeal on its own. Once in
your program, the data may bear little resemblance to the original
file it was taken from. You may want to save the data in your own
custom format so that future load operations don't have to wade
through all the garbage again.
Graphics Pipeline - The steps needed to
convert vertices and their attached data into pixels on your
computer screen. Don't get intimidated by this term. It's not
that big of a deal once you know what the steps are to convert
vertices to pixels on the screen. Basically, you draw your model in
your modeling program, like Blender, and store the vertices along
with other data in a file which is then loaded through the content
pipeline into memory. See vertex to get some idea of all the data
that can go into a vertex. This process CAN get pretty complex with
custom programmed shaders. However, the basic steps are: place the
vertices of the model into the 3D world using a world matrix.
Simulate a camera by combining a view matrix with the world matrix
of each object in the scene. Project the 3D world to a 2D drawing
surface. These matrices are often combined and the math in them
applied to the vertices as the first step inside the vertex shader.
A programmable vertex shader can be used to manipulate the vertex
information further and to change it; this can be an extremely
complex step. The converted vertex information is then passed to the
rasterizer that creates pixels between the corners of the now 2D
vertices to shade in the triangle. Each of these pixels is passed to
the pixel shader where a pixel shader program can do calculations to
determine the color of the pixel that will show up on the screen.
Originally, the Graphics Pipeline was all hardwired into the video
card and known as the Fixed Function Pipeline. Eventually, a
Programmable Pipeline was introduced, originally programmed in a
low-level shader assembly language. Today, High Level Shader Language
(HLSL) and the OpenGL Shading Language (GLSL) are used to program the
programmable stages of the pipeline.
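The basic vertex-transform steps above can be sketched in a few lines of Python. This is a simplified illustration only (translation-only matrices, projection omitted, column-vector convention assumed), not any particular API:

```python
# A rough sketch of the vertex-transform steps of the graphics pipeline,
# using translation-only 4x4 matrices so the numbers stay readable.

def mat_mul(a, b):
    """Multiply two 4x4 matrices (the rightmost applies to vertices first)."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def mat_vec(m, v):
    """Apply a 4x4 matrix to a homogeneous 4-component vertex position."""
    return [sum(m[i][k] * v[k] for k in range(4)) for i in range(4)]

def translation(tx, ty, tz):
    return [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]

world = translation(5, 0, 0)          # place the model at x = 5
view = translation(0, 0, -10)         # camera at z = 10 => shift world by -10
world_view = mat_mul(view, world)     # combine (projection omitted here)

vertex = [0, 0, 0, 1]                 # a model-space vertex at the origin
print(mat_vec(world_view, vertex))    # [5, 0, -10, 1] -- now in view space
```

A vertex shader performs essentially this multiplication for every vertex, with a full world-view-projection matrix instead of simple translations.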
Index - Lists the order that vertices
should be drawn in. Vertices are used to draw lines or
triangles. A list of indices specifies what order to draw the
vertices in and allows vertices to be re-used to draw multiple
triangles or lines. Indices also allow the vertices to be drawn out
of order. Using indices is optional, but they are almost always used
because of the greater efficiency they provide in drawing using
vertices.
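A quick sketch of the idea: a quad drawn as two triangles needs six corners, but indices let it reuse four shared vertices. This is illustrative data only, not a graphics API call:

```python
# A sketch of why indices help: a quad needs two triangles (6 corners),
# but only 4 unique vertices. Indices let the vertices be shared.

vertices = [
    (0.0, 0.0),  # 0: bottom-left
    (1.0, 0.0),  # 1: bottom-right
    (1.0, 1.0),  # 2: top-right
    (0.0, 1.0),  # 3: top-left
]

# Two triangles drawn from the same vertex list, three indices each.
indices = [0, 1, 2,   # first triangle
           0, 2, 3]   # second triangle reuses vertices 0 and 2

triangles = [tuple(vertices[i] for i in indices[t:t + 3])
             for t in range(0, len(indices), 3)]
print(triangles)
# Without indices we would have stored 6 vertices; with them, only 4.
```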
Left Handed Cartesian Coordinate System - The Z axis points into the screen.
When you define an X and Y axis in 3D, there is no universal truth
that dictates whether the positive Z direction points forward or
back. The decision is completely arbitrary, and unfortunately
everyone decides it differently, although many programs will let you
choose. DirectX traditionally uses a left handed coordinate system
(XNA, it's worth noting, actually defaults to right handed). The name
comes from your hand: point the fingers of your left hand along
positive X, curl them toward positive Y, and your thumb points along
positive Z, into the screen.
Matrix - Think of it as a "black box" that
holds math. If you want a formal definition, go study Matrix
Algebra which is often taught in a Linear Algebra course after you
finish Calculus. However, matrices are actually simple enough that
they are often taught to middle school children, and mankind has
been using them since ancient China. They are generally
implemented as an array and the difference is that matrices are used
in a special way. You load them up with math formulas and they store
the results of all the combined formulas that you put into them.
Generally, they store an orientation (or rotation) in 3D space, a
position in 3D space, and scaling (or size) information. You
multiply them together to combine them, and the order you combine
them in changes the result, so it is critically important. There are
generally four different types of matrices
used: the Transformation matrix, the World matrix, the View Matrix,
and the Projection Matrix.
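A small sketch of why combination order is imperative, using 2D homogeneous matrices (column-vector convention assumed) so the numbers stay readable. The helpers are made up for illustration:

```python
import math

# Multiplying the same two matrices in different orders gives different
# results: rotate-then-translate spins in place, translate-then-rotate orbits.

def mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def apply(m, p):
    v = [p[0], p[1], 1]
    r = [sum(m[i][k] * v[k] for k in range(3)) for i in range(3)]
    return (round(r[0], 6), round(r[1], 6))

def translate(tx, ty):
    return [[1, 0, tx], [0, 1, ty], [0, 0, 1]]

def rotate(deg):
    c, s = math.cos(math.radians(deg)), math.sin(math.radians(deg))
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

p = (1, 0)
# Rotate first, then move: the point ends up at (5, 1).
print(apply(mul(translate(5, 0), rotate(90)), p))   # (5.0, 1.0)
# Move first, then rotate: the same inputs give (0, 6) -- an "orbit".
print(apply(mul(rotate(90), translate(5, 0)), p))   # (0.0, 6.0)
```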
Normal - A vector with a length of one that
represents a direction without an amount. (Strictly speaking, a
"normal" is a vector perpendicular to a surface and a length-one
vector is a "unit vector", but in game programming the two terms blur
together because surface normals are almost always stored with a
length of one.) Any vector can be turned into a normal by
"normalizing" it, which means setting its length to one. Normals are
used to keep track of directions in 3D space and especially for
calculating lighting in High Level Shader Language (HLSL). If you
multiply a vector times a scalar (a
plain-ol' number) it will multiply the length/amount of the vector
by that amount. So, if you multiply a normalized vector times a
scalar value (plain-ol' number), it will "set" the vector's length to
that scalar without changing the vector's direction. You can set the
amount of any vector by normalizing it and then multiplying it by the
amount you want; this will not change its direction unless you
multiply by a negative number, which reverses the direction 180
degrees while setting the length to the absolute value of the number
you multiplied by.
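A minimal sketch of normalizing and re-scaling, using plain tuples; the helper names are made up for illustration:

```python
import math

# Normalize a vector (set its length to one without changing direction),
# then multiply by a scalar to "set" its length to that scalar.

def length(v):
    return math.sqrt(sum(c * c for c in v))

def normalize(v):
    """Scale the vector so its length is one; direction is unchanged."""
    n = length(v)
    return tuple(c / n for c in v)

def scale(v, s):
    return tuple(c * s for c in v)

v = (3.0, 4.0, 0.0)
print(length(v))                 # 5.0
n = normalize(v)
print(length(n))                 # very close to 1.0
# Multiplying a unit vector by 10 "sets" its length to 10.
print(length(scale(n, 10.0)))    # very close to 10.0
```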
Object/Model Space Coordinates - The
coordinates, or positions, used when you created the model in your
modeling program. Read World Space Coordinates first. Again,
don't make it more complicated than it has to be. This is just how
the model was created in the modeling program. It will be exactly the
same in your game world if you don't position, orient, and scale it;
once you do, it is said to no longer be in object space, but the
original coordinates themselves never change. No one EVER moves the
coordinates/vertices in a model. What you do instead is apply a
world matrix to ALL of the vertex positions in the model to place it
in the 3D world. So, to sum it up: your model is in "object/model"
space unless you do something to change that. HOW you change that is
by applying a world matrix to it to position, orient, and scale it.
The only difference between before and after is the object's world
matrix.
Origin - The center of the game world.
On a Cartesian axis, the origin is where the X and Y axes cross and
where both equal zero. In 2D space, this is where X and Y both equal
zero. On a computer screen, the origin is generally the upper left
corner and all screen coordinates are positive. In 3D space, there
is no standard screen; once the world is projected, the origin could
end up anywhere on the screen. Instead, the origin of 3D space is the
center of that 3D world: the point where the X, Y, and Z axes meet at
X=0, Y=0, Z=0.
Pi - The relationship of the distance
across a circle to the length of its outer edge. All circles in
the universe have an outer edge, known as its circumference, of
exactly Pi times its distance across, also known as the circle's
diameter. This is one of only a handful of the key truths that
trigonometry is built on. Because this is a universal truth, if you
can measure a circle's diameter, you always know its circumference
no matter whether the circle is the size of an atom or the size of
Pluto's orbit in space. Pi is used with radians, which are related
to the circle's radius. A radius is half of a diameter, so the
circumference is (2*r)*Pi, which is usually written as 2*Pi*r. In
other words, you have to travel 2*Pi radii (a little more than six)
to go all the way around the outside edge of the circle. Notice that
the 2 is there to convert radii into diameters and has nothing to do
with the number Pi itself. Pi is very roughly 3.14159265359, but you
can look it up to get a more accurate version. Pi's digits go on
forever, so any written-out version of Pi is an approximation; it's
only a matter of how precise you care to be.
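The whole relationship fits in one line of code. A tiny sketch:

```python
import math

# Circumference = Pi * diameter, no matter the size of the circle.

def circumference(diameter):
    return math.pi * diameter

print(circumference(1.0))        # a circle one unit across: Pi itself
print(circumference(2.0) / 2.0)  # still Pi -- the ratio never changes
```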
Pipeline - Process. Yes. It's that
simple. I really hate the word "pipeline" because it sounds so
technical and complicated and when people start talking about
something like the "graphics pipeline" or the "content pipeline" it
just sounds so intimidating and technical. But a pipeline is just a
process. I think the term came from the idea that something could
not avoid a step in the process or loop back in the process,
although today both are common in most pipelines. See graphics
pipeline and content pipeline.
Polygon - The
triangles that make up everything in your 3D world. Technically
a polygon is any multi-sided shape, often referred to as a "poly".
Quads, or quadrilaterals, are polygons too; they just have 4 sides
instead of 3. But so far, I've never seen a graphics card that knew
how to draw anything other than a line or a triangle. (I think they
can also draw points and they used to be able to draw sprites but
now graphics cards just use two triangles to form a quad and draw
the 2D sprite on the 3D quad.) Low-poly models (a relative term) are
desirable because the more triangles in the model, the more computer
resources it takes to draw it.
Projection Space (Screen Space) Coordinates - Coordinates converted to be on your flat
computer screen (or back buffer). All of the objects in your 3D
world have to be drawn on your flat 2D computer screen. Once the
coordinates or vertices of your objects are converted to be used on
your 2D screen they are in "projection space". Generally, you apply
a world matrix to an object to position it into the world, a view
matrix to move the entire 3D world to simulate a camera, and a
projection matrix to project the world onto a 2D image for drawing
to the screen. The way this is done is by multiplying the world,
view, and projection matrix together (often inside the shader) and
then multiplying every vertex position by the combined result. This
converts all vertices to the flat 2D screen while accounting both for
the positions of the objects and for the position of the camera.
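The projection idea can be sketched with a bare-bones perspective divide. Real APIs use a full projection matrix with near and far planes; this is only an illustration with made-up helper names:

```python
# Project a view-space point onto a 2D plane by dividing by its depth.
# Farther points land nearer the screen center, which is what makes
# distant objects look smaller.

def project(point3d, focal_length=1.0):
    x, y, z = point3d
    return (focal_length * x / z, focal_length * y / z)

# Two points at the same height; the farther one projects closer to center.
print(project((1.0, 1.0, 2.0)))   # (0.5, 0.5)
print(project((1.0, 1.0, 10.0)))  # (0.1, 0.1)
```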
Radian - An angular measurement based on the number of circle radii
traveled around the outer edge (circumference). The length between a
circle's center and its outer edge is the circle's radius. A radian
is the angle you sweep out by traveling the length of one radius
along the outer edge; 2.5 radians sweeps out the length of 2.5 radii.
However, this is an angular measure. It's used the same way degrees
is used. If you say 180 degrees you know how far around the circle
that is, but you don't know what distance it is. If you say Pi (about
3.14159) radians around the circle, you know that is a distance of a
little more than 3 radii around the edge...except you don't know the
length of a radius unless you know the size of the circle... just
like degrees. So, it doesn't really tell you a length; it tells you
a percentage around the circle, just like degrees does. You have to
do additional math to determine the actual distance around the outer
edge of the circle. Pi radians, or Pi radii along the edge, is half a
circle regardless of how big or small the circle is or what its
radius is. Pi, by the way, is basically 3.14159265359, although its
digits go on forever. See Pi. So, if Pi radians is half a circle,
2*Pi radians is a full circle. 1/2*Pi radians is a quarter circle, or
90 degrees. 1/4*Pi, or Pi/4, is an eighth of a circle, or 45 degrees.
You can multiply to get something like three quarters: 3*(1/2)*Pi =
270 degrees. Computers like radians. The sooner you learn to use them
the better, and the secret is learning to relate them to Pi.
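Relating radians to Pi and degrees in code, plus the arc-length rule (angle in radians times radius gives the distance along the edge). The helper names are made up for illustration:

```python
import math

# Degrees and radians are both "percentage around the circle" measures;
# arc length is what you get once you also know the radius.

def deg_to_rad(degrees):
    return degrees * math.pi / 180.0

print(round(deg_to_rad(180), 9))          # ~3.141592654: half a circle is Pi radians
print(round(deg_to_rad(90), 9))           # ~1.570796327: a quarter circle is Pi/2

def arc_length(radians, radius):
    """Actual distance traveled along the edge: angle in radians * radius."""
    return radians * radius

# Half way around a circle of radius 2 is Pi * 2 units of distance.
print(round(arc_length(math.pi, 2.0), 9))
```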
Right Handed Cartesian Coordinate System -
The Z axis points out of the screen. When you define an X and Y
axis in 3D, there is no universal truth that dictates whether the
positive Z direction points forward or back. The decision is
completely arbitrary, and unfortunately everyone decides it
differently, although many programs will let you choose. OpenGL uses
a right handed coordinate system. The name comes from your hand:
point the fingers of your right hand along positive X, curl them
toward positive Y, and your thumb points along positive Z, out of the
screen.
Rotation Matrix - Matrix used to rotate an
object. Generally, you load up a matrix with a rotation around
a single axis. This is because the rotation formula is fundamentally
a 2D formula, and you apply it around each axis in turn to build up a
3D rotation. Rotations always occur around the origin, or the result
will be an orbit around the origin instead. This is because the
matrix is
applied to all vertices in the model. So you are actually rotating
the vertices, not the model itself. You combine (using
multiplication) a rotation matrix with the object's world matrix in
order to rotate an object. The order of the multiplication is
imperative. I believe you multiply the world matrix times the
rotation matrix to rotate around the object's local axis (but you
still have to do that at the origin). Multiplying them in reverse
order will rotate around the world's axis and often give you
unexpected results. If it rotates funny try the other order. If it
orbits you are not doing the rotation at the origin (the secret is
to move it, rotate it, and then move it back before you draw it).
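The "move it, rotate it, move it back" trick can be sketched with plain 2D points; these are illustrative helpers, not an engine API:

```python
import math

# Rotation always happens around the origin, so to spin an object in
# place you subtract its center, rotate, then add the center back.

def rotate_point(p, deg):
    """Rotate a 2D point around the origin by deg degrees."""
    c, s = math.cos(math.radians(deg)), math.sin(math.radians(deg))
    return (round(p[0] * c - p[1] * s, 6), round(p[0] * s + p[1] * c, 6))

center = (10.0, 0.0)                     # the object sits at x = 10
corner = (11.0, 0.0)                     # a vertex one unit from its center

# Rotating the raw vertex orbits it around the WORLD origin instead:
print(rotate_point(corner, 90))          # (0.0, 11.0) -- an orbit, not a spin

# Move to the origin, rotate, move back: the object spins in place.
local = (corner[0] - center[0], corner[1] - center[1])
rx, ry = rotate_point(local, 90)
print((rx + center[0], ry + center[1]))  # (10.0, 1.0) -- rotated about itself
```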
Shader - The process that draws stuff to
the screen. Back in the old days, shaders were built into the
graphics card itself. This made things easier because you could just
tell the shader to draw stuff. Eventually, someone realized that if
they could make the drawing capabilities of the graphics card
programmable, it would GREATLY increase the abilities of the
graphics card. Before, they had to hardwire everything into the
graphics card itself. You basically had to make a new graphics card
to change anything and this greatly limited the functionality. So,
originally, they made the part of the graphics card that passes the
vertices to the rasterizer programmable; this is called a "vertex
shader". The rasterizer shades in the inside of the triangular
polygons that are used to draw EVERYTHING (the vertex shader
projects everything to 2D and it's a 2D image by the time it gets to
the rasterizer). The individual pixels that the rasterizer creates
are passed to the "pixel shader" which can alter the color of every
pixel in some extremely complex ways. This is a "programmable shader".
Later versions added more programmable stages: DirectX 10 introduced
a geometry shader, and DirectX 11 and OpenGL 4.0 added tessellation
shader stages. Often, when people refer to shaders they mean a High
Level Shader Language (HLSL) or OpenGL Shading Language (GLSL)
program that runs on the graphics card and may contain any
combination of the shading stages mentioned.
Translation Matrix - Matrix used to move
something. Generally, translation is done with a Translation
matrix; you load up a matrix with the translation and then combine
it with the object's world matrix to move the object. Translation
changes the position of the object when applied to the object's
world matrix. If the object's world matrix ONLY contains a
translation, then the translation will position it to that spot.
Otherwise, it will move the object by that amount. It can be thought
of as a positional offset from the current position already stored in
the object's world matrix, which places the object in the 3D world.
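A translation matrix in the usual 4x4 homogeneous form, sketched in Python (column-vector convention assumed; helper names are made up for illustration):

```python
# The translation amounts sit in the last column; a homogeneous w of 1
# on the point is what lets that column get added to the position.

def translation(tx, ty, tz):
    return [[1, 0, 0, tx],
            [0, 1, 0, ty],
            [0, 0, 1, tz],
            [0, 0, 0, 1]]

def transform(m, p):
    v = (p[0], p[1], p[2], 1)            # w = 1 enables the offset column
    return tuple(sum(m[i][k] * v[k] for k in range(4)) for i in range(3))

# Move the point (1, 1, 1) by (5, 0, -2):
print(transform(translation(5, 0, -2), (1, 1, 1)))   # (6, 1, -1)
```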
Vector - An amount that is permanently tied
to a direction. Vectors are generally thought of as arrows in
game programming. The length of the arrow is its "amount" and the
direction the arrow points from its tail to its head is its
"direction". Vectors in game programming are always assumed to have
their tail at the origin; this is done for efficiency and to simplify
the math, and it is also common in mathematics. The values stored in
a vector object are the position of the vector's head. Where the
vector is located is irrelevant, because it only represents an amount
and a direction and nothing else. However
- quite confusingly, positions and such are often stored in vector
objects even though they are not truly vectors, but this also tends
to simplify the math and code. A normalized vector has a length of
one and is generally considered to contain "no" amount since
multiplying by one results in the same thing. A normalized vector,
or normal, represents a direction with no amount.
Vertex - The point where lines connect.
When you first start out, you may be inclined to think of them as
points in 3D space; they are not points even though they always
contain points. You might best think of them as objects that contain
information for drawing, whether you are drawing lines or triangular
polygons. Normally, you will be drawing triangular polygons and you
can think of them as objects that define the corners. However, they
almost always contain more information than just the position of the
vertex. They can contain a color, and the triangle will be shaded
according to the colors in its 3 vertices, which can all be
different. They usually contain UV Texture
coordinates to facilitate painting the object using a texture. They
usually also contain normals which store the direction the vertex is
"facing". This is usually an average of all the normals of the faces
connected to that vertex. The normals are used for lighting
calculations. All of this data goes into each vertex and so vertex
size in memory can vary greatly depending on what data is in the
vertex. Vertices are often connected by indices (see index).
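The kind of data a single vertex carries can be sketched as a simple structure. This is an illustrative layout only, not any engine's actual vertex format:

```python
from dataclasses import dataclass
from typing import Tuple

# One vertex bundles a position with extra per-corner data the
# pipeline uses for shading.

@dataclass
class Vertex:
    position: Tuple[float, float, float]  # where the corner sits in space
    normal: Tuple[float, float, float]    # the direction the vertex "faces"
    color: Tuple[float, float, float]     # per-corner color, blended across the triangle
    uv: Tuple[float, float]               # texture coordinates for painting

v = Vertex(position=(0.0, 1.0, 0.0),
           normal=(0.0, 1.0, 0.0),
           color=(1.0, 0.0, 0.0),
           uv=(0.5, 0.0))
print(v.position, v.uv)
```

Add or remove fields and the size of every vertex in memory changes, which is why vertex size varies so much between models.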
View Space (Camera Space) Coordinates - The coordinates of
an object after the entire game world is moved to simulate there
being a camera in the 3D world. Again, people try and make this
overly complex (see World Space Coordinates). Your models are
positioned into the 3D world using their own private world matrices.
However, at that point you still don't have a "camera". The camera
would be permanently locked at the origin looking straight down the
Z axis. In order to simulate a camera, you move EVERYTHING in the 3D
world. This may be a little confusing, but it's relativity at work:
everything is relative to your point of view. Whether the camera
moves through the world, or the world moves around the camera, the
end results are the same. If you look up in
the sky, you would swear the sun revolves around the earth, although
we know it's actually the exact opposite of that. By moving the
entire world around the camera, you simulate a camera moving through
the 3D world even though actually the camera is still at the origin
looking down the Z axis. And when you realize that the image has to
be drawn to your flat computer screen you'll realize that it pretty
much has to be done this way. Anyway, view space is once the object
has been mathematically shifted to simulate the camera. Applying the
view matrix does this.
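For a camera that is only translated, "moving the world the opposite way" is one subtraction per point. A minimal sketch (a full view matrix also undoes the camera's rotation):

```python
# Simulate a camera by shifting every world-space point by the camera's
# negated position, leaving the "camera" at the origin.

def view_transform(point, camera_pos):
    return tuple(p - c for p, c in zip(point, camera_pos))

camera = (0.0, 0.0, 10.0)        # the "camera" sits at z = 10
obj = (0.0, 0.0, 12.0)           # an object two units beyond it

# After the transform the camera is effectively at the origin and the
# object is two units down the axis -- same relationship, shifted world.
print(view_transform(obj, camera))   # (0.0, 0.0, 2.0)
```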
Winding Order - Order that the vertices are
drawn in and how that affects which side is the "outside" of the
polygon. Whether you use indices or not, you specify the
vertices to be drawn in a specific order. Every triangle in the
mesh is drawn from three vertices in a given order. From one side of
the triangle those vertices appear in clockwise order, and from the
other side they appear in counterclockwise order. Without backface
culling, both sides will draw.
World Space Coordinates - Position in your
3D world. I get a little annoyed with all this "world" space,
"object" space, "model" space stuff. This is really simple and
everyone has to make it complex. Basically, your modeling software
builds your model around a certain origin point (0,0,0). When you
bring it into your game world, it will STILL have those coordinates.
It turns out that you may not want all 4,000 of your game objects to
ALWAYS stay in the exact same position, scale, and orientation that
you created them in. So, you position, orient, and scale them in
your 3D world. THAT is world space coordinates. Don't make it harder
than it has to be: world space coordinates just means the
coordinates in your 3D world once stuff has been placed in the 3D
world. See Object Space Coordinates.
Summary
I thought it might be handy to have a dictionary that defines terms
for beginners. I usually try to explain things pretty thoroughly, but I
think it might be helpful to just have one central location where you
can look up various words.