Being impressed by the power of GLSL, I decided to implement another new feature: hardware lighting. The N64 does all the necessary calculations for vertex transformation and lighting in the Reality Signal Processor (RSP), which is part of the multimedia Reality Co-Processor (RCP). Thus, it is safe to say that the N64 uses hardware transform and lighting (HW T&L).
3dfx hardware did not support HW T&L, so I could not implement hardware lighting when I worked on Glide64. On the other hand, other graphics plugins, which are dedicated to more advanced hardware, also calculate lighting in software. I can't say that no plugin uses hardware lighting, because I have not read the sources of all plugins, but I have only seen software implementations of lighting. I see two reasons for not using hardware capabilities to emulate N64 lighting:
- It is not necessary. Whether your plugin calculates the vertex colors or your card does it, the visual result is the same; the only difference is performance. N64 games are low-polygonal, so a PC CPU has no problem doing all the necessary calculations. For example, UltraHLE ran at full speed on a Pentium II processor.
- Lighting is another core feature of 3D graphics which the N64 implemented in its own way. In the simplest case, lighting calculation needs the color of the surface (provided via the vertices), the normal to the surface at each of its vertices, the color of the light, and the light direction. OpenGL expects all of these components in order to calculate lighting: the vertex color is blended with the light color in proportions defined by the angle between the light direction and the vertex normal, and pixels get their colors by interpolating the vertex colors. This is known as Gouraud shading. The N64 uses basically the same mechanism, but with an essential optimization: the N64 vertex structure uses the same fields for color and normal. Thus a vertex carries either color information, when lighting is not used, or a normal, but not both. Yet a surface color is still necessary for lighting calculation. The N64 does a trick: it provides the light color already blended with the color of the surface it is applied to, and the RSP calculates the intensity of that color for each particular vertex. This mechanism is not very suitable for implementation via standard OpenGL/DirectX fixed-function lighting.
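The mechanism above can be sketched in C. The structures and names below are illustrative assumptions, not the actual microcode layout; the point is that the same bytes hold either a color or a normal, and that each light's color arrives pre-blended with the surface color, so the RSP only has to compute an intensity per vertex:

```c
/* Illustrative sketch of N64-style vertex lighting. Names and layout
   are assumptions, not the real microcode structures. The same three
   bytes hold either an RGB color (lighting off) or a normal
   (lighting on) -- never both. */
typedef union {
    signed char normal[3];   /* x, y, z in [-128, 127], lighting on */
    unsigned char color[3];  /* r, g, b,                lighting off */
} VertexAttrib;

typedef struct {
    float dir[3];    /* normalized light direction                  */
    float color[3];  /* light color ALREADY blended by the game
                        with the color of the lit surface           */
} N64Light;

/* Compute the lit color of one vertex, as the RSP would:
   ambient plus each light's pre-blended color scaled by
   intensity = max(0, N . L). */
static void light_vertex(const signed char normal[3],
                         const N64Light *lights, int numLights,
                         const float ambient[3], float out[3])
{
    for (int c = 0; c < 3; ++c)
        out[c] = ambient[c];
    for (int i = 0; i < numLights; ++i) {
        float intensity = 0.0f;
        for (int c = 0; c < 3; ++c)
            intensity += (normal[c] / 127.0f) * lights[i].dir[c];
        if (intensity < 0.0f)
            intensity = 0.0f;
        for (int c = 0; c < 3; ++c) {
            out[c] += intensity * lights[i].color[c];
            if (out[c] > 1.0f)  /* clamp to valid color range */
                out[c] = 1.0f;
        }
    }
}
```

For example, a normal pointing directly at a single white light yields full intensity, and the vertex gets the full pre-blended light color.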
It is not hard to rewrite the software lighting calculations in GLSL and get hardware lighting. I could write a vertex shader that calculates vertex colors and obtain the same result as the software implementation. That might be interesting as an exercise, but it would give us nothing new. So I decided to implement per-pixel lighting, known as the Phong shading model. Phong shading provides more realistic lighting than traditional Gouraud shading, but it requires much more computational resources and is usually not used in games. This picture illustrates the difference between the two shading models:
First, I disabled the software lighting calculation and put the light data into OpenGL light structures. Then I wrote a fragment shader which calculates the fragment color using the lights and the interpolated normal. It works.
What can I say about the result? N64 3D models are low-polygonal. On the one hand, this makes Phong shading an easy task for modern cards. On the other hand, the models are not smooth enough, so the difference between Gouraud and Phong shading is not drastic. The resulting lighting is slightly more accurate, more distinct:
The old games now look a bit nicer; my color combiner module became more complex.
There is one pleasant side effect of hardware lighting: cel (or toon) shading becomes possible, as it is based on calculating lighting intensity. The main idea of cel shading is to define several levels of lighting intensity and keep the shading constant between those levels. I made a rough implementation of cel shading:
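The quantization step at the heart of cel shading can be sketched as follows; the number of levels and the rule of snapping down to the lower edge of each band are my illustrative choices, not necessarily what the plugin does:

```c
/* Quantize a lighting intensity in [0, 1] into `levels` flat bands
   (levels >= 2), producing the stepped look of cel shading.
   Snapping down to the band's lower edge is an illustrative choice. */
static float cel_quantize(float intensity, int levels)
{
    if (intensity < 0.0f) intensity = 0.0f;
    if (intensity > 1.0f) intensity = 1.0f;
    /* Index of the band this intensity falls into... */
    int band = (int)(intensity * levels);
    if (band >= levels)          /* intensity == 1.0 lands in the top band */
        band = levels - 1;
    /* ...and the constant intensity used across that whole band. */
    return (float)band / (float)(levels - 1);
}
```

With four levels, for instance, any intensity below 0.25 is rendered completely flat, which is what gives cel-shaded scenes their cartoon look.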
Djipi's Cel Zelda texture pack.