MIP mapping is a basic technique of 3D computer graphics. It is supported by both PC and N64 hardware and is based on the level of detail (LOD). To quote the documentation: “LOD is used to determine or change the levels of detail for objects that reside far away or near. Objects located far away can be rendered with much less detail by setting the level, which greatly improves the drawing rate.” MIP mapping is used by many N64 games and is often essential for correct emulation. However, differences between the N64 and PC implementations of that technique make using PC hardware MIP mapping very hard, perhaps impossible:
- On PC, all of a texture's mipmaps are loaded as one texture. The texture minification filter is then set to a mipmap mode, and the texture can be used like any other texture. N64 mipmaps are separate tiles placed in texture memory. The particular tile index is a function of the level of detail of the primitive, and the combiner must be set to blend two mipmap tiles using the fraction of the calculated LOD (see the sketch after this list). Thus, in order to use PC MIP mapping, the graphics plugin must somehow guess that the loaded tiles will be used as mipmaps, build one PC texture from them, and then ignore the combiner settings, replacing them with plain texture sampling. It is very indirect and hackish.
- The mipmap levels of a PC texture must be consistent: every mipmap level must have half the dimensions of the previous level (until reaching a dimension of one, and excluding border texels). N64 tiles for different mipmap levels are allowed to be identical, and Super Mario 64, for example, uses that. ‘Detail’ and ‘Sharpen’ modes make the situation even worse.
- N64 mipmap levels are made for a 320x240 screen resolution. At higher PC resolutions the calculated level of detail is higher, so the most detailed tile will always be selected, which is not always the desired result.
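To illustrate the N64 side of the first point, here is a minimal sketch of a fragment shader doing the combiner-style blend, assuming the two mipmap tiles are bound as separate samplers and the LOD fraction is already known. All names here are illustrative, not taken from any real plugin:

```glsl
uniform sampler2D uTile0;  // more detailed mipmap tile
uniform sampler2D uTile1;  // next, less detailed tile
uniform float uLodFrac;    // fraction of the calculated LOD, in [0, 1]
varying vec2 vTexCoord;

void main()
{
    // Third-axis interpolation between the two tiles, as the N64
    // combiner does with its LOD fraction input.
    vec4 texel0 = texture2D(uTile0, vTexCoord);
    vec4 texel1 = texture2D(uTile1, vTexCoord);
    gl_FragColor = mix(texel0, texel1, uLodFrac);
}
```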
In my previous article I described the problem with hardware implementation of N64 lighting (shading). That problem is not critical, because the software implementation of N64 lighting works perfectly: lighting is calculated per vertex, and the calculated vertices are passed to the hardware. LOD, however, is calculated per pixel, so it must be done at the hardware level. 3dfx hardware did not support custom per-pixel processing, so I could not implement hardware MIP mapping emulation in Glide64. I made a very rough (per-polygon) approximation of the LOD fraction calculation, which worked fine in a few games and badly in others. The result in general was unsatisfactory, and I waited for a chance to do it right.
At first glance, GLSL is a perfect tool for hardware implementation of N64 LOD calculation. To cite the N64 manual: “LOD is computed as a function of the difference between perspective corrected texture coordinates of adjacent pixels to indicate the magnification/demagnification of the texture in screen space (texel/pixel ratio). The LOD module also calculates an LOD fraction for third axis interpolation between MIP maps.” So, the task is to take the texture coordinates of the current and previous fragments, put them into the LOD calculation formula, and take the fraction of the resulting LOD. The formula uses pow() and log() functions, which GLSL supports, and taking the fraction is also a standard operation. The path looked clear, and I started the work expecting fast results. Experienced OpenGL programmers already see the trap on that path. I was a newbie, and my fall into that trap was very confusing: there is no such thing as a “previous fragment” in GLSL. A fragment knows nothing about the state of adjacent pixels. This is logical: the RDP renders one pixel per cycle (or per two cycles in two-cycle mode) and knows everything about pixels drawn before the current one, while a PC video card has many pixel processors working simultaneously.
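The calculation itself is simple once the coordinate deltas are available; getting them is the whole problem. A minimal sketch of the formula described above, assuming dx and dy are the differences between perspective-corrected texture coordinates (scaled to texels) of adjacent pixels:

```glsl
// dx, dy: texture coordinate deltas between adjacent pixels, in texels.
// Obtaining these deltas in a fragment shader is the hard part.
float lodFrac(vec2 dx, vec2 dy)
{
    float rho = max(length(dx), length(dy)); // texel/pixel ratio
    float lod = max(log2(rho), 0.0);         // clamp magnification to level 0
    return fract(lod);                       // fraction for the third-axis blend
}
```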
Thus, there is no direct way for a fragment shader to get the texture coordinates of adjacent fragments. That was bad news for me. I started to look for an indirect way to fulfill the task and, after digging in manuals and forums, found a solution. A fragment shader cannot get information about other fragments, but it can read any texels from the available textures. A pixel's texture coordinates are two numbers, which can be saved as color components of a texture. So, the solution is to draw mip-mapped textures in two passes (sketched after the list):
- The first pass uses an auxiliary texture frame buffer and a special shader, which writes the fragment's texture coordinates into the auxiliary LOD texture.
- The second pass uses the LOD calculation shader, which takes the texture coordinates of adjacent fragments from the LOD texture; then the standard shader renders the fragment using the calculated LOD fraction.
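A minimal sketch of the two passes, assuming the LOD texture matches the frame buffer size, uses a floating-point color format (so the coordinates survive the round trip), and is sampled with GL_NEAREST. All names are illustrative:

```glsl
// Pass 1: render the geometry into the auxiliary frame buffer,
// storing each fragment's texture coordinates as color components.
varying vec2 vTexCoord;

void main()
{
    gl_FragColor = vec4(vTexCoord, 0.0, 1.0);
}
```

```glsl
// Pass 2: read the coordinates of the adjacent fragments back from the
// LOD texture, compute the LOD fraction and blend the two tiles.
uniform sampler2D uLodTex;   // coordinates written in pass 1
uniform sampler2D uTile0;    // more detailed mipmap tile
uniform sampler2D uTile1;    // next, less detailed tile
uniform vec2 uScreenSize;    // frame buffer dimensions in pixels
uniform vec2 uTexSize;       // tile dimensions, to scale UVs to texels
varying vec2 vTexCoord;

void main()
{
    vec2 pixel = 1.0 / uScreenSize;
    vec2 uv  = gl_FragCoord.xy * pixel;  // this fragment in the LOD texture
    vec2 tc  = vTexCoord * uTexSize;     // own coordinates, in texels
    vec2 tcX = texture2D(uLodTex, uv + vec2(pixel.x, 0.0)).st * uTexSize;
    vec2 tcY = texture2D(uLodTex, uv + vec2(0.0, pixel.y)).st * uTexSize;
    float rho = max(length(tcX - tc), length(tcY - tc));
    float lodFrac = fract(max(log2(rho), 0.0));
    gl_FragColor = mix(texture2D(uTile0, vTexCoord),
                       texture2D(uTile1, vTexCoord), lodFrac);
}
```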
I’m not sure that the solution I found is the best possible one; as a newbie, I was happy to find any working solution. It took me two weeks to make it work somehow, and the many technical problems made this task the hardest of all I had done before. Nevertheless, hardware emulation of N64 LOD calculation proved to be possible. Also, I learned the Frame Buffer Object (FBO), which will be my best friend in the next long and hard quest: frame buffer emulation.
Peach-Bowser portrait transition. MIP mapping with two mipmap tiles of equal size.
Perfect Dark. Wall and floor textures use MIP mapping.
Update: neobrain suggested that I use the dFdx and dFdy functions to get the "difference between perspective corrected texture coordinates of adjacent pixels" needed to calculate the LOD. The functions really do the job, and now my LOD calculation shader function is simple, straightforward and close to the original N64 one. It also works much better than my previous texture-based two-pass implementation. neobrain, thank you very much for that hint!
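The idea, sketched with the same illustrative names as above: dFdx and dFdy return the difference of an expression between horizontally and vertically adjacent fragments, which is exactly the delta the two-pass trick had to recover through a texture:

```glsl
uniform vec2 uTexSize;   // tile dimensions in texels (illustrative)
varying vec2 vTexCoord;

float lodFrac()
{
    // Screen-space derivatives of the texture coordinates, in texels.
    vec2 dx = dFdx(vTexCoord) * uTexSize;
    vec2 dy = dFdy(vTexCoord) * uTexSize;
    float rho = max(length(dx), length(dy));
    return fract(max(log2(rho), 0.0));
}
```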