Normal mapping, also known as Dot3 bump mapping, is a technique commonly used in video games to add detail to polygonal surfaces. I was never able to fully grasp how it works until I started to study it in greater detail.
I've seen normal map tutorials that taught how to use them, but they never explained the process in a way that let me see exactly how they worked. I wanted to understand how light is calculated from normal values stored in an image.
Below is a diagram of Dot3 bump mapping that explains what it is and how it works. The rest of this tutorial goes into supplementary detail and theory.
I didn't want to just use normal maps, I wanted to understand how they work. Every time the subject came up, I thought of normal maps as odd-looking pinkish and bluish images. The colors didn't make sense. I decided to investigate for myself. Below are the images I have come up with that serve as good examples for explaining normal mapping.
A half sphere in 3D space |
Normal map texture data in 2D A perfectly round sphere is a good starting point for understanding normal maps. Here, the RGB spectrum is used in a very clever way to represent the XYZ coordinates of per-pixel normals. Although this is a 2D image, its RGB values hold 3-dimensional direction data instead of color: R = X, G = Y and B = Z. In the middle of the normal map, Z points directly at us. Where that's the case, it creates a neutral light purple color, also seen all around the edges of this normal map. |
Imagine that the half-sphere in the left example is a fully 3D sphere with a high level of detail, made up of tiny triangles so that the surface of the sphere is completely smooth. This is a good starting point because this half of a sphere represents a perfectly round object. Not all normal-mapped objects will be as perfectly round as this sphere, but it shows us the full range of R, G and B values that can possibly be used.
And here is how the RGB colors are mapped to the XYZ normal coordinates:
R = X |
G = Y |
B = Z |
RGB is mapped to XYZ respectively (R = X) with minimum and maximum values between 0 and 255. The light purple color around the sphere, as well as in its exact middle, is represented by the values r/x = 127, g/y = 127 and b/z = 255. This is the neutral color; at this location, the normal points directly at us. In the image below, it corresponds to the normal vector pointing straight up (r = 127, x = 0.5).
We know that RGB goes from 0 to 255, for a total of 256 levels in each of the 3 color channels. When this data is looked up for use as a normal, each channel is first converted to the range between 0.0f and 1.0f as shown on the example image, and then remapped to the range -1.0f to 1.0f (n = 2c - 1), so that a normal can point in either direction along each axis.
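The lookup described above can be sketched in a few lines of Python (the function name is my own choice for illustration): each 8-bit channel is scaled into 0.0-1.0 the way a texture fetch would return it, then remapped into -1.0..1.0.

```python
def decode_normal(r, g, b):
    """Convert an 8-bit RGB texel into a normal vector component-by-component.

    Each channel is scaled into 0.0-1.0, then remapped into -1.0..1.0
    with n = 2c - 1.
    """
    return tuple(c / 255.0 * 2.0 - 1.0 for c in (r, g, b))

# The neutral light purple texel (127, 127, 255) decodes to a normal
# pointing almost exactly straight out of the texture: x and y come out
# within 1/255 of zero, and z is exactly 1.0.
nx, ny, nz = decode_normal(127, 127, 255)
print(nx, ny, nz)
```

Note that 127 decodes to -1/255 rather than exactly 0.0; with 256 levels there is no channel value that lands precisely on zero, which is why 127 and 128 are both treated as the neutral middle in practice.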
Here I have stripped out the red component of the RGB spectrum to make it easier to understand. But picture this working in all 3 RGB components, or in other words, on all 3 axes: X, Y and Z. When we do that, the distinctive pink/blue colors of a normal map are produced.
What about values that would point underneath the half-sphere? We're not concerned with them. Normal mapping is calculated from the point of view of a camera looking directly down at the normal map. It makes no sense to calculate light for pixels that face away from the camera, which is pretty much everything below the horizontal line in the image seen here.
When these coordinates are calculated, the camera is theoretically placed right above the vertical arrow pointing directly up (r = 127, x = 0.5), looking straight down at it. Only from this vantage point do the normal map values make sense; all values stored in a normal map are based on this principle. This way we get the full range of normals except those pointing away from the camera, which we wouldn't need anyway. Of course, we are not talking about an in-game camera here, only the theoretical camera used to define the space the normal map is recorded in. In a real-world scenario the camera view can be arbitrary, but by that point the normal map coordinates are already pre-recorded in each normal map texture.
You have probably seen this range of 0.0-1.0 used when specifying texture coordinates in OpenGL. This same range is also used for normal map lookups. Basically, 0.0 is the lowest possible value and 1.0 is the highest. Dividing 1.0 by 255 gives roughly 0.0039f, which is the smallest step between distinguishable values in each channel. That is more than sufficient for realistic light effects on a per-pixel (per-fragment) basis.
Normal maps are combined with the object's texture to produce realistic lighting. But they really shine (no pun intended) on low-poly models, by faking higher-resolution detail without increasing polygon count. The normal values modify how dark or bright each pixel is, based on the angle between the per-pixel normal direction stored in the normal map and the direction of the light source.
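This is where the "Dot3" in the name comes from: the brightness of a pixel is the 3-component dot product of the decoded normal and the direction toward the light, clamped at zero. A minimal Python sketch of that calculation (the function names are mine, for illustration):

```python
import math

def normalize(v):
    """Scale a vector to unit length."""
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def dot3_brightness(normal_rgb, light_dir):
    """Dot3 lighting: dot product of the decoded per-pixel normal
    and the normalized direction toward the light, clamped at zero."""
    # Remap the 0..255 channels into a -1..1 normal vector.
    n = normalize(tuple(c / 255.0 * 2.0 - 1.0 for c in normal_rgb))
    l = normalize(light_dir)
    # Clamp to zero: pixels facing away from the light stay dark.
    return max(0.0, sum(a * b for a, b in zip(n, l)))

# A neutral texel lit head-on is at (nearly) full brightness...
print(dot3_brightness((127, 127, 255), (0.0, 0.0, 1.0)))
# ...while a texel tilted 90 degrees away from the light gets none.
print(dot3_brightness((255, 127, 127), (-1.0, 0.0, 0.0)))
```

Multiplying this brightness value by the color sampled from the object's regular texture gives the final lit pixel, which is exactly how the normal map and the texture are merged together.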
© 2014 OpenGL Tutorials.