In 3D gaming, polygons are everything. Much has changed since the early days of 3D gaming, but if you compare an early polygonal game (like Quake) to a current game (like Crysis 3), the biggest difference you will see is the number of polygons on the screen.
In the early days of 3D gaming, increasing the number of polygons (called the “poly count”) was simply a matter of throwing more transistors on the GPU and ramping up the clock speed.
For a while, this worked just fine, and we got games with higher and higher poly counts. But eventually, hardware designers hit a point of diminishing returns, and poly counts were unable to keep up with demand. So they started to get creative.
Um . . . I’ll take the one on the right.
The first thing that these engineers did, and it was paradigm-changingly huge, was to create “shaders.” A shader is a small program that runs on dedicated “mini-processors” on the die of the graphics processing unit (GPU). Shaders handle very simple equations, but address huge problems (reflective surfaces, for example). Originally, shaders were fixed-function (meaning they could only do a set number of things), but eventually fully programmable shaders (meaning they can be coded to do whatever the game designer dreams up) found their way onto video cards. For a frame of reference, the first video game console to have programmable shaders was the original Xbox (the PS2, GameCube, and Wii had fixed-function hardware, for the curious).
Oh Microsoft, we thank thee for giving unto us the first game console with programmable shaders . . .
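To make the fixed-function vs. programmable distinction concrete, here’s a minimal CPU-side sketch in C. Real shaders run on the GPU in languages like HLSL or GLSL; the function names and the tiny “framebuffer” here are invented purely for illustration.

```c
/* Minimal CPU-side sketch of the "programmable shader" idea.
 * Real shaders run on the GPU (in HLSL/GLSL); here a plain C
 * function pointer stands in for the programmable pixel stage.
 * All names are invented for illustration. */
#include <stdio.h>

typedef struct { float r, g, b; } Color;

/* A "pixel shader": given a pixel's texture coordinates, produce a color. */
typedef Color (*PixelShader)(float u, float v);

/* Fixed-function era: one hardwired effect, take it or leave it. */
static Color flat_gray(float u, float v) {
    (void)u; (void)v;
    return (Color){0.5f, 0.5f, 0.5f};
}

/* Programmable era: whatever the game designer dreams up,
 * e.g. a procedural stripes-and-gradient effect. */
static Color designer_effect(float u, float v) {
    float stripe = ((int)(u * 10.0f) % 2 == 0) ? 1.0f : 0.6f;
    return (Color){u * stripe, v * stripe, 0.8f * stripe};
}

int main(void) {
    const int W = 4, H = 2;
    PixelShader shaders[] = { flat_gray, designer_effect };

    for (int s = 0; s < 2; s++) {
        printf("shader %d:\n", s);
        for (int y = 0; y < H; y++) {
            for (int x = 0; x < W; x++) {
                /* The hardware runs the bound program once per pixel. */
                Color c = shaders[s]((x + 0.5f) / W, (y + 0.5f) / H);
                printf("(%.2f %.2f %.2f) ", c.r, c.g, c.b);
            }
            printf("\n");
        }
    }
    return 0;
}
```

Swapping the entry in that array is the whole trick: the pipeline stays the same, but the per-pixel program is up to the developer.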
The way in which shaders fixed the polygon problem without actually fixing the polygon problem (er) was by making light act as though a model has more polygons than it really does. This is done with a variety of techniques, such as bump mapping, normal mapping, and parallax mapping. Let’s consider normal mapping. The artist begins by designing a model with an impossibly high poly count. He then uses that model to generate a normal map: a special “texture” that, instead of color, encodes in its RGB channels the direction each point on the surface faces, telling light how to reflect off the model. The artist then reduces the model to a much more manageable poly count. When the model is put into a game, the normal map runs through a shader, which tells light to reflect off surfaces that aren’t really there. Neat!
It isn’t the prettiest job, but it really displays normal mapping well.
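If you’re curious what “telling light how to reflect” means in practice, here’s a minimal sketch in C of the core idea, assuming simple Lambert diffuse lighting. The 2x2 “normal map,” the light direction, and all the numbers are made up for illustration; a real normal map is a full-resolution texture sampled on the GPU.

```c
/* Sketch of how a normal map "fakes" geometry: the surface is a
 * perfectly flat quad (true normal = +Z), but lighting uses a
 * per-pixel normal fetched from a tiny hand-written "normal map",
 * so light reflects off bumps that aren't really there.
 * The map values and light direction are invented for illustration. */
#include <math.h>
#include <stdio.h>

typedef struct { float x, y, z; } Vec3;

static Vec3 normalize3(Vec3 v) {
    float len = sqrtf(v.x * v.x + v.y * v.y + v.z * v.z);
    return (Vec3){v.x / len, v.y / len, v.z / len};
}

static float dot3(Vec3 a, Vec3 b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

int main(void) {
    /* A 2x2 "normal map": each texel stores a direction, not a color.
     * (A real texture would encode these as RGB values in [0,255].) */
    Vec3 normal_map[2][2] = {
        { { 0.4f, 0.0f, 1.0f}, {-0.4f, 0.0f, 1.0f} },
        { { 0.0f, 0.4f, 1.0f}, { 0.0f,-0.4f, 1.0f} },
    };

    Vec3 light_dir = normalize3((Vec3){0.3f, 0.5f, 1.0f});
    Vec3 flat_normal = {0.0f, 0.0f, 1.0f}; /* the real geometry */

    for (int y = 0; y < 2; y++) {
        for (int x = 0; x < 2; x++) {
            Vec3 n = normalize3(normal_map[y][x]);
            /* Lambert diffuse: brightness = max(0, N . L) */
            float mapped = fmaxf(0.0f, dot3(n, light_dir));
            float flat   = fmaxf(0.0f, dot3(flat_normal, light_dir));
            printf("texel (%d,%d): flat %.2f vs normal-mapped %.2f\n",
                   x, y, flat, mapped);
        }
    }
    return 0;
}
```

The flat surface lights every texel identically; the normal-mapped version brightens and darkens texel by texel, which your eye reads as real bumps.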
The problem with all of these techniques is that when you get close to the model, you can tell that it isn’t really as high poly as it appears from a distance: the lighting fakes detail, but the silhouette gives the real geometry away. The newest hardware and APIs (DirectX 11 via Shader Model 5.0, and OpenGL 4.0) support a new technique called “hardware tessellation.” Before anyone accuses me of spreading misinformation: yes, I know that ATI video cards had tessellation years ago (called TruForm). But TruForm never made it into DirectX (it was available in OpenGL), and its feature set was quite different, so I’m treating the new tessellation technology as its own thing.
The way tessellation works is simple: The artist creates his high-poly model, reduces the poly count, and ships that in the game. Tessellation then “creates” extra polygons, smoothing out the overall model. ATI’s old implementation did this quite well, but it was indiscriminate: no matter the model, it was getting tessellated, dammit! This was great for soft shapes, like faces, but bad for hard-edged ones, like chiseled rocks.
TruForm is on the left, untessellated is on the right. Notice two things: 1) the tessellation on the face is a huge improvement, and 2) the tessellation on the hands makes the fingers look like slightly damp sausage.
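To see the mechanism behind that smoothing, here’s a toy sketch in C of the classic subdivide-and-round-off approach: each triangle is split into four at its edge midpoints, and the new vertices are pushed out onto an ideal curved surface (a unit sphere here). This only illustrates the principle; real hardware tessellation runs on GPU patch data, not a recursive CPU function.

```c
/* Sketch of what tessellation does: split each triangle into four by
 * edge midpoints, then "smooth" by snapping the new vertices onto the
 * ideal curved surface (here, a unit sphere). Poly count grows 4x per
 * level while the model gets visibly rounder. Illustration only. */
#include <math.h>
#include <stdio.h>

typedef struct { float x, y, z; } Vec3;

static Vec3 normalize3(Vec3 v) {
    float len = sqrtf(v.x * v.x + v.y * v.y + v.z * v.z);
    return (Vec3){v.x / len, v.y / len, v.z / len};
}

static Vec3 midpoint_on_sphere(Vec3 a, Vec3 b) {
    /* Midpoint of an edge, pushed outward onto the sphere: this is
     * the "smoothing" step that rounds off the coarse model. */
    return normalize3((Vec3){(a.x + b.x) / 2,
                             (a.y + b.y) / 2,
                             (a.z + b.z) / 2});
}

/* Recursively tessellate one triangle; returns the triangle count. */
static long tessellate(Vec3 a, Vec3 b, Vec3 c, int level) {
    if (level == 0)
        return 1; /* a real renderer would emit (a,b,c) here */
    Vec3 ab = midpoint_on_sphere(a, b);
    Vec3 bc = midpoint_on_sphere(b, c);
    Vec3 ca = midpoint_on_sphere(c, a);
    return tessellate(a, ab, ca, level - 1)
         + tessellate(ab, b, bc, level - 1)
         + tessellate(ca, bc, c, level - 1)
         + tessellate(ab, bc, ca, level - 1);
}

int main(void) {
    /* One face of an octahedron: a very coarse stand-in for a sphere. */
    Vec3 a = {1, 0, 0}, b = {0, 1, 0}, c = {0, 0, 1};
    for (int level = 0; level <= 4; level++)
        printf("tessellation level %d: %ld triangles\n",
               level, tessellate(a, b, c, level));
    return 0;
}
```

Four levels in, one shipped triangle has become 256, and every one of them hugs the curved surface. That 4x-per-level growth is exactly why the artist can ship a low-poly model and still get a smooth result.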
The new Shader Model allows for programmable tessellation. Making a rock? Just disable tessellation. Making a person? Crank tessellation to the max.
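In code terms, the tessellation level stops being a global switch and becomes something your engine computes per object. Here’s a toy sketch in C; the material names, distances, and levels are all invented for illustration:

```c
/* Toy sketch of "programmable" tessellation: instead of one global
 * on/off switch (the TruForm situation), the game decides a
 * tessellation level per object, so soft shapes get smoothed and
 * hard-edged ones are left alone. Names and numbers are invented. */
#include <stdio.h>

typedef enum { MAT_FACE, MAT_CLOTH, MAT_CHISELED_ROCK } Material;

static int tess_level(Material m, float camera_distance) {
    switch (m) {
    case MAT_CHISELED_ROCK:
        return 0;               /* hard edges: don't touch it */
    case MAT_FACE:
    case MAT_CLOTH:
        /* Soft shapes: tessellate hard up close, less at a distance. */
        if (camera_distance < 5.0f)  return 5;
        if (camera_distance < 20.0f) return 3;
        return 1;
    }
    return 0;
}

int main(void) {
    printf("face up close: level %d\n", tess_level(MAT_FACE, 2.0f));
    printf("face far away: level %d\n", tess_level(MAT_FACE, 50.0f));
    printf("chiseled rock: level %d\n", tess_level(MAT_CHISELED_ROCK, 2.0f));
    return 0;
}
```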
They say a picture is worth a thousand words, so I’ll let the following images sell you on hardware tessellation.
Remember, you can get this on the PS4, (probably) the Xbox Infinity, and a PC with reasonably current hardware and at least Windows Vista! Buy or pre-order today!