
Tuesday, September 24, 2013

Hardware LOD

MIP mapping is a basic technique of 3D computer graphics. It is supported by both PC and N64 hardware and is based on the level of detail (LOD). Cite: “LOD is used to determine or change the levels of detail for objects that reside far away or near. Objects located far away can be rendered with much less detail by setting the level, which greatly improves the drawing rate.” MIP mapping is used by many N64 games and is often essential for correct emulation. However, differences in how the N64 implements this technique make using PC hardware MIP mapping very hard (impossible?):
  • On PC, all of a texture's mipmaps are loaded as one texture. Then the texture's minification filter is set to a mipmap mode, and the texture can be used like any other texture. N64 mipmaps are separate tiles placed in texture memory. The particular tile index is a function of the level of detail of the primitive. The combiner must be set to blend two mipmap tiles using the fraction of the calculated LOD (see the sketch after this list). Thus, in order to use PC MIP mapping, the graphics plugin must somehow guess that the loaded tiles will be used as mipmaps, build one PC texture from them, and then ignore the combiner settings, replacing them with plain texture usage. It is very indirect and hackish.
  • The mipmap levels of a PC texture must be consistent: every mipmap level must be half the dimensions of the previous level (down to a dimension of one, excluding border texels). N64 tiles for different mipmap levels are allowed to be the same size, and this is used in Super Mario 64, for example. ‘Detail’ and ‘Sharpen’ modes make the situation even worse.
  • N64 mipmap levels are created for a 320x240 screen resolution. At higher PC resolutions the level of detail is high, so the most detailed tile will almost always be selected. That is not always the desired result.
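To make the difference concrete, here is a minimal GLSL sketch of the N64-style approach, where two mipmap tiles are blended explicitly by the LOD fraction. The sampler and uniform names are made up for illustration; this is not the plugin's actual code.

```glsl
#version 330 core
// N64-style tile blending: each mipmap level is a separate tile, and the
// combiner blends two of them explicitly by the LOD fraction.
in vec2 vTexCoord;
out vec4 fragColor;
uniform sampler2D uTile0;   // tile selected by floor(LOD)
uniform sampler2D uTile1;   // the next tile
uniform float uLodFrac;     // fractional part of the calculated LOD

void main()
{
    vec4 c0 = texture(uTile0, vTexCoord);
    vec4 c1 = texture(uTile1, vTexCoord);
    fragColor = mix(c0, c1, uLodFrac);
    // On PC the equivalent is a single sampler with a full mipmap chain and
    // GL_LINEAR_MIPMAP_LINEAR as the minification filter; the blend above
    // then happens implicitly inside the texture unit.
}
```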
In my previous article I described the problem with hardware implementation of N64 lighting (shading). That problem is not critical, because the software implementation of N64 lighting works perfectly: lighting is calculated per vertex, and the calculated vertices are passed to the hardware. LOD, however, is calculated per pixel, so it must be done at the hardware level. 3dfx hardware did not support custom pixel processing, so I could not implement hardware MIP mapping emulation in Glide64. I made a very rough (per-polygon) approximation of the LOD fraction calculation, which worked fine in a few games and badly in others. The result in general was unsatisfactory, and I waited for a chance to make it right.

At first glance, GLSL is a perfect tool for hardware implementation of the N64 LOD calculation. Citing a N64 manual: “LOD is computed as a function of the difference between perspective corrected texture coordinates of adjacent pixels to indicate the magnification/demagnification of the texture in screen space (texel/pixel ratio). The LOD module also calculates an LOD fraction for third axis interpolation between MIP maps.” So, the task is to take the texture coordinates of the current and previous fragments, put them into the LOD calculation formula, and take the fraction of the resulting LOD. The formula uses the ‘pow()’ and ‘log()’ functions supported by GLSL, and taking the fraction is also a standard operation. The way looked clear, and I started the work expecting fast results. Experienced OpenGL programmers already see the trap on that path. I’m a newbie, and my fall into that trap was very confusing: there is no such thing as a “previous fragment” in GLSL. A fragment knows nothing about the state of adjacent pixels. This is logical. The RDP renders one pixel per cycle (or per two cycles in two-cycle mode), and it knows everything about the pixels drawn before the current one. A PC video card has several (many) pixel processors which work simultaneously.

Thus, there is no direct way for a fragment shader to get the texture coordinates of adjacent fragments. That was bad news for me. I started to look for an indirect way to fulfill the task and, after digging through manuals and forums, found a solution. A fragment shader can't get information about other fragments, but it can read any texel from the available textures. A pixel's texture coordinates are two numbers, which can be saved as color components of a texture. So, the solution is to draw mip-mapped textures in two passes (a sketch of both shaders follows the list):
  • The first pass uses an auxiliary texture frame buffer and a special shader, which writes the fragment's texture coordinates into the auxiliary LOD texture.
  • The second pass uses the LOD calculation shader, which takes the texture coordinates of adjacent fragments from the LOD texture; then the standard shader renders the fragment using the calculated LOD fraction.
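A rough GLSL sketch of that two-pass idea follows. The uniform names, the float-texture assumption, and the exact LOD formula are my illustrative assumptions, not the plugin's actual code. The first pass only stores the texture coordinates:

```glsl
#version 330 core
// Pass 1: render into an auxiliary FBO and write the fragment's texture
// coordinates as color (assumes a float color attachment for precision).
in vec2 vTexCoord;
out vec4 outCoords;

void main()
{
    outCoords = vec4(vTexCoord, 0.0, 1.0);   // store (s, t)
}
```

The second pass then reads the stored coordinates of the neighboring pixels and derives the LOD fraction from the texel/pixel ratio:

```glsl
#version 330 core
// Pass 2: fetch texture coordinates of adjacent pixels from the LOD texture
// and compute the LOD fraction used to blend two mipmap tiles.
in vec2 vTexCoord;
out vec4 fragColor;
uniform sampler2D uLodTexture;   // texture coordinates written in pass 1
uniform sampler2D uTile0;
uniform sampler2D uTile1;
uniform vec2 uScreenSize;        // size of the LOD texture in pixels
uniform vec2 uTileSize;          // size of the base tile in texels
uniform float uMaxLevel;         // number of mipmap tiles minus one

void main()
{
    vec2 texel = 1.0 / uScreenSize;
    vec2 here  = texture(uLodTexture, gl_FragCoord.xy * texel).xy;
    vec2 right = texture(uLodTexture, (gl_FragCoord.xy + vec2(1.0, 0.0)) * texel).xy;
    vec2 down  = texture(uLodTexture, (gl_FragCoord.xy + vec2(0.0, 1.0)) * texel).xy;

    // Texel/pixel ratio: how far the texture coordinate moves per screen pixel.
    float dx = length((right - here) * uTileSize);
    float dy = length((down  - here) * uTileSize);
    float lod = log2(max(max(dx, dy), 1.0));
    float lodFrac = fract(clamp(lod, 0.0, uMaxLevel));

    fragColor = mix(texture(uTile0, here), texture(uTile1, here), lodFrac);
}
```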
I'm not sure that the solution I found is the best possible one; as a newbie, I was happy to find any working one. I spent two weeks making it work somehow. Many technical problems made this task the hardest among those done previously. Nevertheless, hardware emulation of the N64 LOD calculation has proved to be possible. Also, I learned about the Frame Buffer Object (FBO), which will be my best friend in the next long and hard quest: frame buffer emulation.
Peach-Bowser portrait transition. MIP mapping with two mipmap tiles of equal size.

Perfect Dark. Walls and floor textures use MIP mapping.

Update: neobrain suggested that I use the dFdx and dFdy functions to get the "difference between perspective corrected texture coordinates of adjacent pixels" needed to calculate the LOD. The functions really do the job, and now my LOD calculation shader function is simple, straightforward, and close to the original N64 one. It also works much better than my previous texture-based two-pass implementation. neobrain, thank you very much for that hint!
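For illustration, a simplified derivative-based LOD fraction calculation could look like this; the uniform names and the clamping are my assumptions, not the exact shader from the plugin:

```glsl
#version 330 core
// LOD fraction from screen-space derivatives (dFdx/dFdy), as suggested by
// neobrain: no auxiliary texture and no second pass are needed.
in vec2 vTexCoord;
out vec4 fragColor;
uniform sampler2D uTile0;
uniform sampler2D uTile1;
uniform vec2 uTileSize;     // size of the base tile in texels
uniform float uMaxLevel;    // number of mipmap tiles minus one

void main()
{
    // How many texels the texture coordinate steps over per screen pixel.
    vec2 dx = dFdx(vTexCoord) * uTileSize;
    vec2 dy = dFdy(vTexCoord) * uTileSize;
    float rho = max(length(dx), length(dy));

    float lod = log2(max(rho, 1.0));
    float lodFrac = fract(clamp(lod, 0.0, uMaxLevel));

    fragColor = mix(texture(uTile0, vTexCoord),
                    texture(uTile1, vTexCoord), lodFrac);
}
```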


Wednesday, September 11, 2013

Hardware lighting


Being impressed by the power of GLSL, I decided to implement another new feature: hardware lighting. The N64 does all the necessary calculations for vertex transformation and lighting in the Reality Signal Processor (RSP), which is part of the multimedia Reality Co-Processor (RCP). Thus, it is safe to say that the N64 uses hardware transformation and lighting (HW T&L).

3dfx hardware did not support HW T&L, so I could not implement hardware lighting when I worked on Glide64. On the other hand, other graphics plugins, which are designed to work on more advanced hardware, also use software lighting calculation. I can't say that no plugin uses hardware lighting, because I have not read the sources of all plugins, but I have seen only software implementations of lighting. I see two reasons for not using hardware capabilities to emulate N64 lighting:

  • It is not necessary. No matter whether you calculate vertex colors in your plugin or your card does it, the visual result is the same. The only difference is in performance. N64 games are low-polygonal, and there is no problem for a PC CPU to do all the necessary calculations. For example, UltraHLE worked at full speed on a Pentium II processor.
  • Lighting is another core feature of 3D graphics which the N64 implemented in its own way. In the simplest case, the lighting calculation needs the color of the surface (provided via vertices), the normals to the surface at each of its vertices, the color of the light, and the light direction. OpenGL expects all these components to be provided in order to calculate lighting. The vertex color is blended with the light color in proportions defined by the angle between the light direction and the vertex normal. Pixels get their color values through interpolation of the vertices' colors. This is known as Gouraud shading. The N64 uses basically the same mechanism, but with an essential optimization: the N64 vertex structure uses the same fields for color and normal. Thus a vertex has either color information, when lighting is not used, or a normal, but not both. The surface color is still necessary for the lighting calculation, so the N64 does a trick: it provides the light color already blended with the color of the surface it is applied to, and the RSP calculates the intensity of that color for a particular vertex (see the sketch after this list). This mechanism is not very suitable for implementation via standard OpenGL/DirectX functionality.
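To illustrate that mechanism, a vertex shader emulating it could look roughly like the sketch below. The attribute and uniform names, the fixed light count, and the assumption that light directions are already in model space are mine, not taken from any actual plugin:

```glsl
#version 330 core
// N64-style vertex lighting sketch: the vertex carries a packed normal in
// place of a color, and each light color arrives already blended with the
// surface color by the game.
layout(location = 0) in vec3 aPosition;
layout(location = 1) in vec3 aNormal;   // occupies the vertex color fields

uniform mat4 uMvp;
uniform int  uNumLights;
uniform vec3 uLightColor[8];       // light color pre-blended with surface color
uniform vec3 uLightDirection[8];   // normalized, assumed to be in model space
uniform vec3 uAmbient;             // ambient color, also pre-blended

out vec4 vColor;

void main()
{
    vec3 n = normalize(aNormal);
    vec3 color = uAmbient;
    for (int i = 0; i < uNumLights; ++i) {
        float intensity = max(dot(n, uLightDirection[i]), 0.0);
        color += uLightColor[i] * intensity;
    }
    vColor = vec4(min(color, vec3(1.0)), 1.0);
    gl_Position = uMvp * vec4(aPosition, 1.0);
}
```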

It is not hard to rewrite the software lighting calculations with GLSL and get hardware lighting. I could write a vertex shader to calculate the vertex color and obtain the same result as with the software implementation. It may be interesting as an exercise, but it would give us nothing new. So, I decided to implement per-pixel lighting, known as the Phong shading model. Phong shading provides more realistic lighting in comparison with traditional Gouraud shading, but it requires much more computational resources and usually is not used in games. This picture illustrates the difference between the two shading models:

First, I disabled the software lighting calculation and put the lights data into OpenGL light structures. Then I wrote a fragment shader which calculates the fragment color using the lights and the interpolated normal. It works.
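A minimal sketch of such a fragment shader is below; the names and the fixed-size light arrays are illustrative assumptions, not the plugin's actual code:

```glsl
#version 330 core
// Per-pixel (Phong-style) lighting: the vertex shader only passes the packed
// normal through, and the intensity is computed per fragment with the
// interpolated, re-normalized normal.
in vec3 vNormal;
in vec2 vTexCoord;
out vec4 fragColor;

uniform sampler2D uTexture;
uniform int  uNumLights;
uniform vec3 uLightColor[8];       // pre-blended with the surface color, as above
uniform vec3 uLightDirection[8];
uniform vec3 uAmbient;

void main()
{
    vec3 n = normalize(vNormal);   // interpolation shortens the normal
    vec3 shade = uAmbient;
    for (int i = 0; i < uNumLights; ++i)
        shade += uLightColor[i] * max(dot(n, uLightDirection[i]), 0.0);
    shade = min(shade, vec3(1.0));
    fragColor = texture(uTexture, vTexCoord) * vec4(shade, 1.0);
}
```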

What can I say about the result? N64 3D models are low-polygonal. On the one hand, this makes Phong shading an easy task for modern cards. On the other hand, the models are not smooth enough, so the difference between Gouraud and Phong shading is not drastic. The resulting lighting is slightly more accurate, more distinct:


The old games now look a bit nicer; my color combiner module became more complex.

There is one side effect of hardware lighting: implementation of cel (or toon) shading becomes possible, as it is based on the lighting intensity calculation. The main idea of cel shading is to set several levels of lighting intensity and keep the shading constant between the levels. I made a rough implementation of cel shading:
The result does not look very cool. Maybe it can be tuned. Also, it should look better with specially made “cel-shading” textures, e.g. with Djipi's Cel Zelda texture pack.
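For reference, the intensity-quantization idea behind cel shading can be sketched like this; a hypothetical single-light example, not the plugin's code:

```glsl
#version 330 core
// Cel shading sketch: quantize the per-pixel light intensity into a few
// constant bands, so the shading stays flat between the levels.
in vec3 vNormal;
in vec2 vTexCoord;
out vec4 fragColor;

uniform sampler2D uTexture;
uniform vec3 uLightColor;       // pre-blended with the surface color
uniform vec3 uLightDirection;
uniform float uLevels;          // number of intensity bands, e.g. 3.0

void main()
{
    float intensity = max(dot(normalize(vNormal), uLightDirection), 0.0);
    intensity = floor(intensity * uLevels) / uLevels;   // constant between levels
    fragColor = texture(uTexture, vTexCoord) * vec4(uLightColor * intensity, 1.0);
}
```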