Sunday, December 25, 2016

Public Release 2.0


The time has come to set a new release tag on the master branch. Twenty months have passed since the first major public release, and the list of improvements made since then is huge, so this release gets a new major version number. Some statistics since the previous public release:
* 690 commits to master
* over 450 various issues closed

Fixes in the combiner, blender, texture loading, and frame buffer emulation eliminated graphics issues that had plagued N64 emulation since the beginning and, until recently, required software rendering for correct emulation. Of course, hundreds of GLideN64's own bugs were squashed along the way. Detailed information about the new features of this release can be found in my New Public Release articles.

Public release 1.0 took 8 months of hard full-time work. It became possible as the result of a successful crowd-funding campaign on Indiegogo.

Now I'm working on the project as a hobby, in my spare time. The new major release took 20 months. It could have taken much longer, but I am no longer working alone: the project currently has 28 contributors.
My special thanks to:
  • purplemarshmallow: frame buffer emulation improvements, general fixes
  • Francisco Zurita aka fzurita: support of Android port, GLES fixes, general fixes
  • Logan McNaughton aka loganmc10: support of Raspberry Pi port, general fixes
  • Ryan Rosser aka Aquatakat: GUI design
  • matthewharvey: code cleanup and optimization
  • Gilles Siberlin aka Gillou68310: ucode decoding, general fixes
  • AmbientMalice: ini cleanup
Very special thanks to Sergey Povalihin aka angrylion for his explanations of various aspects of how the RDP works. His advice helped me to fix many tricky issues.

Also, my thanks to all users of https://github.com/gonetz/GLideN64. Your bug reports and feedback greatly help us improve the quality of the program.

Speaking of quality: we still have over 200 open issues. Not all of them are bug reports; feature requests and suggestions are on that list too. Nevertheless, the list of known issues is still large. There are games that are not supported at all. The work will continue.

Internationalization: at the moment only two translations are up to date:

  • Spanish, thanks to Víctor "IlDucci"
  • French, thanks to baptiste0602

The other translation files were built from the translations for the previous release, so a lot of text is left untranslated.


Of course, donations are welcome.

However, there is a problem.
I live in Russia and have a Russian PayPal account. This account is very limited: I can pay with it, but I cannot receive money transfers or donations. I have not found another convenient and legal way to accept donations in foreign currency.

Update: a Russian user told me that PayPal is actually a valid option for international donations. A test money transfer proved that it works. I created a paypal.me page for convenient transfers:

Also, I have a Yandex Money account for transfers in rubles; see the form at the top.

Saturday, December 10, 2016

Depth buffer emulation II

I have already written about how N64 games can use the frame and depth buffers to create various special effects. The arsenal of tricks is very diverse. Due to the difference in architecture between PC and N64 hardware, almost any such trick requires special support from the graphics plugin. I invented almost all of the frame buffer emulation methods for Glide64 and GLideN64, and as a frame buffer emulation expert, I thought I knew about all the tricks used by N64 games. However, during the work on the new version I met a new class of buffer tricks. I want to explain some of them while I still remember how I solved them.

Before you start reading, it may be useful to refresh your memory with my previous articles about N64 frame and depth buffer emulation:

This article is also about depth buffer emulation; more exactly, it is about

Direct rendering to depth buffer

The N64 depth buffer is an area in the N64's common memory, RDRAM. The depth buffer format is unsigned 16-bit integer. The N64 color frame buffer is also an area in RDRAM. The most common format for the color buffer is 5-5-5-1: a 16-bit packed integer with 15 bits for RGB and 1 bit for alpha. When the Reality Display Processor (RDP) renders a polygon, it stores the color of the polygon's pixels in the color buffer and the depth values in the depth buffer. This is the only "valid" way for the RDP to update the depth buffer. However, a game can allocate a color buffer at the address of the depth buffer, and the RDP will render to that buffer as to a usual 16-bit color buffer. This is how the N64 fills the depth buffer with initial MAX_Z values before rendering starts: the game allocates a 16-bit color buffer at the depth buffer's address and issues a fillrect (fill rectangle) command, which fills the buffer area with a "color" equal to MAX_Z. Then the game switches the color buffer pointer to the address of the main color buffer and rendering starts.
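The fill trick above can be sketched in C++. This is a simplified model with my own names; the real RDP operates on RDRAM with dedicated fill modes:

```cpp
#include <cstdint>
#include <vector>

// Illustrative constant: the maximum value of the 16-bit depth buffer.
constexpr uint16_t MAX_Z = 0xFFFF;

// Model of the fillrect command over a 16-bit buffer in RDRAM.
void fillrect16(std::vector<uint16_t>& buffer, int width,
                int x0, int y0, int x1, int y1, uint16_t color) {
    for (int y = y0; y <= y1; ++y)
        for (int x = x0; x <= x1; ++x)
            buffer[y * width + x] = color;
}

// Clearing the depth buffer = treating it as a color buffer and
// filling the whole area with the "color" MAX_Z.
std::vector<uint16_t> clearDepthBuffer(int width, int height) {
    std::vector<uint16_t> depth(width * height, 0);
    fillrect16(depth, width, 0, 0, width - 1, height - 1, MAX_Z);
    return depth;
}
```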
Depth buffer clear is the most common case of rendering to the depth buffer. Until recently I knew of only three other cases where the RDP renders directly to the depth buffer:
  1. Depth buffer as a temporary color buffer. N64 memory is limited, so why not use the depth buffer area when depth compare is not needed? Some games use the depth buffer area as a temporary color buffer when depth is not used, for example on a pause screen.
    Mario Golf. The depth buffer is used as an auxiliary color buffer to show the previous frame.
  2. Depth buffer copy. 'The Legend of Zelda - Majora's Mask' uses many frame buffer tricks. One of them is related to the Lens of Truth (LoT). When the Lens is activated, it reveals hidden objects. How does it work? First, most of the visible objects are rendered as usual. When everything is ready, the game copies the depth buffer to another place: a new 16-bit color buffer is allocated, and the depth buffer is rendered to it as a texture using the BgCopy command. Then the game renders a fullscreen textured rectangle with the Lens texture. That rectangle has minimal depth, so the whole depth buffer should be filled with MIN_Z. However, the Lens circle has zero alpha, so all of its pixels are discarded by alpha compare and the depth buffer inside the Lens remains intact. Now the game renders the "hidden" objects; they are discarded by depth compare outside of the Lens circle. By the way, any "nice" texture for the LoT in texture packs breaks the LoT functionality: the LoT circle must be fully transparent. When rendering of the "hidden" objects is complete, the game copies the depth buffer back from the temporary buffer, restoring its state to the moment before the Lens texture was drawn. Now the rest of the geometry, which should be drawn above the Lens, can be rendered: for example, snow flakes.
  3. Pre-rendered depth buffer. There are games where a 3D model moves over a 2D background. The 2D background represents a 3D environment with various objects; they are just a flat picture, but we see them as having shape and position in the scene. Our 3D model can be visually behind these objects:
    How is that possible? Obviously, part of the 3D model is discarded by depth compare; otherwise it would look like this:
    2D objects have no depth. How can depth compare work there? Zelda uses a simple solution: first it renders dummy 3D objects that correspond to the objects in the picture, like this:
    These dummy objects allow the game to obtain a valid depth buffer. When it is ready, the 2D background is rendered over it, and finally our 3D models are added:
    There is another famous game that uses 3D models over 2D backgrounds: Resident Evil 2. However, its developers chose another way to get a valid depth buffer. For each 2D color background the game has a pre-rendered depth background. Each frame, that pre-rendered depth background is rendered as a texture to the depth buffer area. The color background is rendered to the color buffer, then the 3D models are rendered over it:
    Resident Evil 2 emulated by Glide64
For a long time I thought that these were all the cases of direct rendering to the depth buffer. I was wrong. There are many more of them.

NFL Quarterback Club 98

That game has a TV monitor in its menus. The monitor should display spinning logos. The logo itself is a 3D model. It is rendered to an auxiliary color buffer, which is then rendered to the TV monitor as a texture. Nothing about that looks hard for hardware frame buffer emulation (HWFBE), but the logo was not shown. When I worked on Glide64, I noticed that the logo became visible when I forcibly disabled depth compare. I made a hack for this game, so it works with Glide64, and nobody noticed that it actually works a bit incorrectly. For GLideN64 I decided to find out why it does not work without that hack. To my surprise, this game also uses a pre-rendered depth buffer texture, similar to RE2. The spinning logo is displayed on the TV screen, so no part of it may cross the bounds of the TV display. The TV screen is a 2D texture of non-rectangular shape, so tools like scissoring cannot cut off the pixels of the logo that cross the screen bounds. The game uses a 16-bit depth texture, which has MAX_Z for texels corresponding to the area inside the screen and MIN_Z outside of that area. An auxiliary color buffer at the depth buffer's address is created, and that texture is rendered to it. Then the auxiliary color buffer for the logo is selected, and the logo is rendered with depth compare working.
Ok, I found it, but how to reproduce that with OpenGL? First: I can't render directly to the depth buffer. Second: the depth texture format differs from the format of the GL depth buffer. Thus, I added a special mode to the fragment shader. In that mode, the fragment shader stores the calculated color as its depth. Not all mobile versions of GL support it, but on desktop GL it works fine. The plugin renders the depth texture, the shader passes it to the depth buffer, and the logo finally works as it should.
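A rough CPU-side model of that special shader mode. In GLSL this would be roughly "gl_FragDepth = texel / 65535.0;"; the exact scaling GLideN64 uses may differ, so treat this as an assumption for illustration:

```cpp
#include <cstdint>

// The 16-bit value sampled from the depth texture is written out as the
// fragment's depth instead of its color, normalized to GL's [0,1] range.
float colorToFragDepth(uint16_t texel) {
    return static_cast<float>(texel) / 65535.0f;
}
```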

International Superstar Soccer 64

This game has an old problem with player shadows. The shadows are rendered dynamically and look incorrect with most graphics plugins. This is because the shadow rendering process is multi-pass and quite tricky. First, the game renders the shadow as a set of overlapping co-planar polygons. In fact, the shadow consists of several thick polylines formed by these polygons. The overlapping polygons would look ugly, but they are invisible! Their only purpose is to create the shadow silhouette in the depth buffer. When the silhouette is ready, the game renders one solid polygon that covers the whole silhouette. This polygon is co-planar with the first, auxiliary polygons. The special "decal" depth compare mode is used here. This mode rejects all pixels of the shadow polygon that are outside the shadow silhouette or above other objects on the field. The result is a solid dynamic shadow. This technique does not write directly to the depth buffer, but modifying the depth buffer by means of special invisible polygons is very close to the technique I used to emulate the logo in Quarterback Club 98. I should note that the "N64 depth compare" option is required to emulate the "decal" depth compare mode.
When I worked on Glide64, I used a hack to show the shadows: I made the invisible polygons visible and removed the final shadow polygon. The shadows were ugly, but much better than nothing:
With GLideN64 the shadows finally look proper:

Mario Golf

Mario Golf has a problem with depth compare on some courses:
When the user rotates the scene with the control stick, the problem disappears:

For a long time I could not understand how it was supposed to work. When the camera moves, the game renders all objects. At some point, the almost-ready picture is copied to a temporary color buffer. When the rotation stops, the content of that temporary buffer is used as a 2D background, and only non-static objects are rendered above it, for example the model of the golfer. Water is also not static, and it is rendered over the 2D background. It must be rendered with depth compare to reject pixels that are hidden by nearby objects. But at this point those nearby objects are not 3D; they are just part of the 2D picture. How can the game get depth values to compare against? Analysis of log dumps helped me discover that not only the color buffer is copied to the temporary location, but the depth buffer is copied too. When the camera is not moving, that depth buffer background is copied from the temporary buffer back to the depth buffer before rendering starts, and non-static objects are correctly rendered with depth compare. This mechanism is similar to the depth buffer copy for the Lens of Truth, but it uses texrect commands for the buffer copy, while LoT uses BgCopy. I modified the shader-based depth buffer write method invented for NFL Quarterback Club 98 to allow rendering to the depth buffer with texrects and other drawing commands. The problem was solved.

Mario Tennis

The game has a problem with the VS screen. It could look like this:

or like this:

Either way, one of the players was missing. I started investigating and found that the game renders two complete scenes, one for each player, to the same frame buffer. The second scene is rendered over the first one, and we see only one player: the one whose scene was rendered last. The log dump showed that the game clears the depth buffer before rendering the second scene, so that scene has no depth-related glitches. I looked closely at the log and noticed that the depth buffer clear works differently for the second scene. The game clears the depth buffer with MAX_Z as usual, but then it fills part of the buffer with MIN_Z using a set of narrow fillrect commands. The part with MIN_Z corresponds to the area to the left of the white diagonal line on the first screenshot. Thus, using depth buffer rendering, the game divides the frame into two areas. The area to the left of the white line is protected from writes by depth compare. The second scene is rendered on the right, and finally the white line is rendered to cover the junction between the scenes:

Pilot Wings

Shadows for all vehicles in Pilot Wings are rendered dynamically. For a long time these shadows remained one of the most mysterious elements, impossible to emulate with hardware rendering. The game is popular, and users don't want to see huge ugly black polygons covering half of the screen.

Thus, most emulators use cheat codes to remove the shadows altogether.
I spent a lot of time trying to understand how the shadows work. When I finally figured it out, it became clear why it is so hard to emulate. The mechanism is really tricky.
The game first prepares the silhouette of the shadow in the depth buffer, as in International Superstar Soccer 64, but this time the game renders directly to the depth buffer. Ok, we have already met this. The surprise is that it renders to the depth buffer with depth compare enabled! The polygon that has to be drawn to the depth buffer has both color and depth. If the polygon's depth passes the depth compare test, its color is stored in the depth buffer. That is, the depth buffer is used as a color buffer and as a depth buffer at the same time. When the silhouette of the shadow is ready, a large polygon with the shadow color is rendered to the color buffer. Pixels outside of the silhouette are discarded by depth compare.
Of course, such a tricky mechanism required special efforts to support it on PC hardware. The task was successfully solved, and the game finally looks as it should:

Sunday, December 4, 2016

New Public Release. Part VI New GUI

Previous posts about the upcoming release explained new options and features. The changes were illustrated with the old-style GUI. That style was designed by Ryan Rosser aka Aquatakat for the first Public Release. The new version got many new options and controls. The old design, initially simple and clean, became clumsy and overloaded. Aquatakat suggested a new design, and recently it was finished. The new design aims to help users select the right options. Many of the available choices have short hints explaining the pros and cons of each choice. As usual, you may point the cursor at a control to get a detailed tool-tip. For example, this is how the Emulation tab looks now:

The previous GUI was translated into seven languages. Those translations became obsolete with the new GUI, so new translations are needed. We already have one: Víctor "IlDucci" made the Spanish translation:

I hope that translations for other languages will also be ready by the release date. Current translations can be downloaded from GitHub.

Wednesday, November 30, 2016

New Public Release. Part V

Let's continue our review of new GLideN64 release. Our next stop is

Post filters 

tab. It is the last stop for version 1.2, and it is a very short stop overall.


It was planned to add many various shader-based post filters to GLideN64. However, due to lack of resources, only Bloom was implemented for version 1.2. You may read more about it in the "Bloom post filter" article. The Bloom filter has not changed since 1.2.

Gamma correction

Version 1.2 has support for N64 gamma correction. However, it was done in the wrong way: as an additional color transformation in the fragment shader. From an architectural point of view this means that gamma correction is performed by the RDP. That is wrong, because it is the duty of the N64 Video Interface. That wrong solution caused glitches in frame buffer emulation, when a previous gamma-corrected buffer was blended with a new one and the picture quickly became overexposed. Thus, that functionality was refactored, and gamma correction moved to its proper place as a post-filter. I also added an option to force gamma correction. You may use it if some game looks too dark to you.
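The post-filter itself boils down to a per-channel power function applied to the final image, after frame buffer emulation is done (so an already-corrected buffer is never corrected twice). A minimal sketch; the gamma value of 2.0 in the test is an illustrative assumption, not the value any particular game uses:

```cpp
#include <cmath>

// Gamma correction of one color channel in [0,1].
float gammaCorrect(float channel, float gamma) {
    return std::pow(channel, 1.0f / gamma);
}
```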

On Screen Display

This new tab was created for a user-requested feature. Users asked me to implement on-screen information about the plugin's performance. On this tab you may select the font and color of the displayed messages, which counters to show, and where to show them. That is all.

Our excursion is over.
Thank you for your attention!

Monday, November 28, 2016

New Public Release. Part IV

Hello again.

Let's continue our excursion. Today it will be short. We will learn what is new in

Texture Enhancement

As you know, GLideN64 adopted the texture enhancement library from the Glide64 project. The original library, GlideHQ, was created by Hiroshi Morii aka KoolSmoky. It was ported to GLideN64 as GLideNHQ. I refactored some code, removed texture compression from the original code, added support for OpenGL formats, and added support for the new xBRZ filter. GLideNHQ's evolution continued after the release of 1.2.
New features:

  • Updated xBRZ filter, now with up to 6xBRZ
  • Color banding reduction. A new feature made by user request. When enabled, each texture passes a pre-processing step that reduces color banding in the resulting filtered texture. Details in feature request #826.
Regarding texture packs: I made several fixes in that code, but all the functionality remains the same as in version 1.2. The font selection controls for on-screen messages moved to the new "On screen display" tab, which will be introduced later.

Thursday, November 24, 2016

New Public Release. Part III

Time to see what is new in

Frame buffer emulation

Buffer swap mode

This is a new option. It controls how often the rendered frame buffer is copied to the screen. The N64 Video Interface updates the screen on each Video Interrupt, which happens 50 times per second for PAL and 60 for NTSC. The frame buffer data is usually updated less often. For example, the NTSC version of Zelda updates the frame buffer 20 times per second (20 frames per second, or 20 FPS). Therefore it seems reasonable to copy the frame buffer to the screen not on every Video Interrupt, but only when a new buffer is ready.

PR 1.2 has only one strategy for buffer swap, based on a heuristic. Later it was found that the heuristic does not always work well. New heuristics were invented, but they also failed in some cases. Finally, I decided to follow the N64 Video Interface behavior: copy the frame buffer to the screen on every Video Interrupt. That solved all known issues with video updates. Thus, this option is the default for desktop builds, and you will hardly ever need to change it.

That mode increases the load on the GPU. It is not noticeable on desktop graphics cards, but mobile users began to complain. Thus, at their request, two more buffer swap modes were added:

  • On VI origin change: swap when the address of the buffer to display changes
  • On color buffer change: swap when the plugin knows that the buffer content has changed
These modes are modifications of the heuristic used in version 1.2. If you have performance issues, try changing the buffer swap mode.

Emulate N64 depth compare

That feature is described in the 'Depth buffer emulation' article. I fixed a few issues related to it, but in general it is the same experimental shader-based depth compare as in version 1.2. It still has experimental status and is not recommended for every game. However, several games will not work correctly without it.

Copy auxiliary color buffer to RDRAM

This is another experimental feature. GLideN64 developer purplemarshmallow noticed that some weird issues can be fixed if an auxiliary color buffer created by the game is copied to RDRAM right after the switch to another buffer. Example: Mario Artist. purplemarshmallow implemented that feature. Unfortunately, it often brings additional glitches and currently has experimental status. Use it at your own risk.

Buffer read/write with emulator help

This is another very promising experimental feature. The problem is described in the 'Emulation of CPU based frame buffer effects' article. In short: the plugin needs to copy color and depth buffer content from video memory to N64 RDRAM to emulate frame buffer effects. The plugin does not know when the N64 CPU will need these buffers, so it copies them every frame. That leads to overhead and, sometimes, to crashes. Also, sometimes the N64 CPU renders something over a frame buffer prepared by the RDP. The plugin needs to render the frame buffer in RDRAM as a texture over its own frame buffer so as not to lose that data. Since the plugin does not know when the CPU modified the buffer, it needs to copy the buffer from RDRAM every frame. That often causes garbage on the screen.

The developers of the 1964 emulator designed an extension to the original Zilmar plugin specification. This extension is named FBInfo. The plugin provides the emulator with a few new callback functions, which the emulator uses to notify the plugin that the CPU is about to read or write data in the frame or depth buffers. This extension, properly implemented, would help the plugin greatly. The plugin could read data from video memory only when necessary, and only the necessary part of it. It could copy from RDRAM only those pixels that were modified by the CPU.

GLideN64 implements the FBInfo extension, strictly following the specification. Unfortunately, support for FBInfo on the emulator side is far from perfect. You may read the whole story in feature request #808. FBInfo is currently supported by three emulators: 1964, Mupen64, and mupen64plus. The 1964 implementation does not follow the FBInfo specification (sic!), so it works, but slowly. Mupen64 supports FBInfo by default, and its implementation follows the specification. Some games work wonderfully with it. Unfortunately, Mupen64 misses some buffer modifications and does not inform the plugin about them, which leads to glitches. The situation is worst with mupen64plus: the FBInfo functionality is heavily broken in it, and it is not recommended to use.

  • Do not use emulator help. This disables FBInfo even if it is supported by the emulator. Set it if FBInfo does not work properly.
  • Read color buffer by chunks. When enabled, the plugin follows the FBInfo specification: "Notify the dll that the frame buffer memory is beening read at the given address. ... DLL should copy 4KB block content back to RDRAM frame buffer." The plugin will read only the chunks of data explicitly requested by the emulator. This may reduce overhead if only a few 4KB chunks have to be read. If the CPU wants to modify the whole buffer, reading by chunks will be much slower. Since the CPU usually needs the whole color buffer, this option is off by default.
  • Read depth buffer by chunks. The same as for the color buffer. However, the CPU seldom needs to read the whole depth buffer; usually it needs just a few values to probe. Thus, reading the depth buffer by chunks is the default.
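The chunked read can be sketched as follows. All names here are illustrative, not the actual FBInfo entry points: instead of copying the whole buffer back to RDRAM, only the 4KB block containing the address the CPU is about to read (reported by the FBInfo callback) is copied.

```cpp
#include <algorithm>
#include <cstdint>
#include <cstring>
#include <vector>

constexpr size_t CHUNK = 4096;  // the 4KB block size from the FBInfo spec

// Copy only the 4KB chunk of video memory that contains readAddress,
// back into the RDRAM region starting at bufferBase.
void copyChunkToRdram(std::vector<uint8_t>& rdram,
                      const std::vector<uint8_t>& videoMem,
                      size_t bufferBase, size_t readAddress) {
    size_t offset = ((readAddress - bufferBase) / CHUNK) * CHUNK; // align down to 4KB
    size_t count = std::min(CHUNK, videoMem.size() - offset);
    std::memcpy(&rdram[bufferBase + offset], &videoMem[offset], count);
}
```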

Buffer read/write without emulator help

If FBInfo is not supported by your emulator, or works incorrectly, you may need to use the forced buffer read/write methods. If you do not remember why buffer copies are necessary, read the 'Emulation of CPU based frame buffer effects' article again.

Copy color buffer to RDRAM

Version 1.2 copies the color buffer in sync mode. That is, the plugin waits until all buffer data is copied from video memory to conventional memory. It guarantees that we copy the actual frame buffer to RDRAM. Sync copy is slow and not always necessary. For example, a frame buffer copy is often used for a TV monitor effect: the TV shows part of the displayed picture. We can use asynchronous buffer reads for such effects. In async read mode the plugin asks the video card for the data but does not wait for the result. Rendering continues, and the video driver performs the data copy in the background. By the next frame the data is ready for instant access. This data is not for the current frame but for the previous one. For the TV effect it does not matter that it displays the picture with a one-frame lag. Actually, there are not many games that require sync color buffer reads. Thus, in the new version async mode is the default.
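The one-frame lag of async reads can be modeled with two slots: each frame the plugin requests a copy of the current frame buffer while the result requested on the previous frame becomes available. A simplified sketch with illustrative names; in the real plugin the background copy is done by the GL driver, not by value assignment:

```cpp
#include <cstdint>
#include <utility>
#include <vector>

class AsyncReader {
public:
    // Issue a read of 'current' and return the data requested last frame.
    std::vector<uint8_t> frameRead(const std::vector<uint8_t>& current) {
        std::vector<uint8_t> previous = std::move(pending_);
        pending_ = current;   // "driver copies in background"; ready next frame
        return previous;      // one frame old: fine for a TV monitor effect
    }
private:
    std::vector<uint8_t> pending_;
};
```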

Copy depth buffer to RDRAM

Everything said about synchronous color buffer copies is also true for depth buffer copies. Unfortunately, async reads do not work for depth buffers. It is possible to read the depth buffer asynchronously, but the CPU does not recognize that data: the CPU needs the depth buffer data for the current frame exactly. Again, sync reads are slow, especially for buffers above 320x240. When a game uses 640x480 internal resolution, the depth buffer read causes noticeable slowdown. Thus, I had to adopt my old technology from Glide64 times: software depth buffer rendering. That dusty code outperforms reads from video memory in most cases (if not all). Thus, software rendering is the default mode for depth buffer copies in the new version.

Render frame buffer as texture

That option enables rendering of data prepared by the N64 CPU from RDRAM to the video frame buffer. It was internally rewritten since version 1.2, but from the user's point of view it is the same.

Detect CPU writes to the frame buffer

Version 1.2 had that option to correctly emulate games where the CPU can suddenly take rendering upon itself, bypassing the RDP. purplemarshmallow found a way to automatically detect such cases, so the option became redundant.

Frame buffer validity check method

Frame buffer validity is the most serious problem in hardware frame buffer emulation. The problem is described in the "Frame buffer emulation. Part I" article. Version 1.2 uses several methods to check frame buffer validity, and the user can choose which works better. purplemarshmallow made a large investigation of the situations where these methods are applicable and efficient. The code was refactored, and now it works transparently to the user.

Monday, November 21, 2016

New Public Release. Part II

Today we will continue to learn the new features of the upcoming Public Release.

Now we will compare

Emulation panel

As you may notice, the new version has a richer set of options.

Emulate mip-mapping

That important and hard-to-emulate feature has been available since the first Public Release. Several issues with mip-mapping have been found and fixed since then, so now it works even better.

Emulate noise

It was good before, so there are no major changes there.

Emulate fog

Do not worry, fog emulation is still there. However, it is not optional anymore. It is now part of one very important feature that works silently under the bonnet of the GLideN64 engine and has no user-facing control. This feature has a long story, worth telling to interested emu fans:

A long time ago I told you the story of my fight with the N64 color combiner, and how I successfully defeated that problem using magical beans: shaders. A similar long story happened with N64 blending. N64 color blending is pretty similar to the OpenGL one: it also mixes an input color with the color in the frame buffer using blending factors. However, there are differences. If the N64 combiner uses 2-cycle mode, the blender has 2 cycles too. That is not the main problem, though. The main problem is that the blender can use two constant colors in its blending equations, namely the fog and blend colors. The fog color is usually used to apply fog, but it is not prohibited to use it for other purposes. Thus, it is possible to mix the output of the color combiner with constant colors, turning the blender into an additional stage of the color combiner! Surely, many games use that possibility. I could not emulate such combine modes with standard OpenGL blending. I would be glad to use magical shaders again, but unfortunately blending is a part of so-called OpenGL fixed functionality. It is prohibited to read a value from the frame buffer, blend it, and write it back using shaders. In other words, no blending with shaders!

There are GL extensions that promise to allow shader-based blending if certain conditions are met. Using them could lead to an incompatible, barely working solution, and I disliked that. The standard OpenGL blender failed in every case where the N64 blender was used to combine colors. To fix it, I added special code to the fragment shader that performed the necessary additional manipulation of the color combiner's output color. In fact, this was a bunch of ugly hacks, and the bunch had a tendency to grow. When I met yet another problem that required special support in the code, I said: enough! I studied all the details of the N64 blender again and invented a general solution to the problem. I wrote a special shader program that uses the same inputs and equations as the N64 blender, and does all the mixing and blending except the final step: blending with the buffer color. That final mix is performed by the standard GL blender, with no extensions. The result was wonderful: no more ugly hacks, and several long-standing issues instantly disappeared (see #481, for example). Fog emulation is now part of the blender and thus always enabled. The only drawback is the additional load on the graphics card, which is noticeable on some old mobile GPUs. Thus, I left an option to use fast, inaccurate legacy blending, but this option is not available in desktop builds. You may read more about the new blending emulation in thread #970.
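A minimal sketch of the N64 blender's per-pixel equation in one cycle, as commonly documented for the RDP: the blended color is (A*P + B*M) / (A + B), where P and M are the two input colors (for example, the combiner output and a constant such as the fog color) and A, B are the selected blend factors. GLideN64's shader evaluates everything except the final mix with the buffer color, which is left to standard GL blending; this simplified function ignores that split:

```cpp
struct Rgb { float r, g, b; };

// One blender cycle over normalized [0,1] color channels.
Rgb blendCycle(const Rgb& p, float a, const Rgb& m, float b) {
    float d = a + b;
    if (d == 0.0f) return p;   // guard against division by zero (my assumption)
    return { (a * p.r + b * m.r) / d,
             (a * p.g + b * m.g) / d,
             (a * p.b + b * m.b) / d };
}
```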

Enable Hardware lighting

This option enables per-pixel lighting calculation, known as Phong shading. I have already written about it. It has not changed since then.

Enable shaders storage

This is a new feature added by request from users of the mobile version. The plugin uses shaders to emulate the N64 color and alpha combine equations, as described here. When a game uses a new combiner, a new shader program is created. Shader programs are compiled and linked on the fly. GLideN64 generates quite large and complex shaders. Shader compilation on mobile devices can cause stuttering in gameplay. OpenGL allows getting a binary representation of a linked shader program. That binary data can be stored in a stream (a file) and restored later. When the shaders storage option is enabled, the plugin saves all shaders to a file. When the user starts that game again, the plugin loads all previously compiled shaders from that file, and further gameplay goes smoothly. Of course, if a new combiner is met, the corresponding shader will be created on the fly and saved to the storage on exit.

Note: the binary representation of a shader program depends on the GPU and driver version. Do not try to copy your shader storage to another device: it will either not work or work improperly.

Use custom per-game settings

The meaning of that option remains the same, but the content of the custom settings file has changed almost completely due to major changes in frame buffer emulation.

2D fixes

Original N64 2D graphics look good in the original resolution, but when you use hi-res output, various glitches appear. So the best way to get perfect 2D is to use the original resolution: set the scale factor to 1 and enjoy. If you are playing a 3D game, it will look poor in the original low resolution. So if you want to get a nice hi-res picture and minimize glitches in 2D, pay attention to the following options.

Fix black lines between texrects

Enabling this option may help to remove black lines between adjacent textured rectangles. The problem and the solution are described in issue #450. If you see black lines between polygons, try the "smart" option. Did not help? Try "force". Still not working? Sorry, there are games with broken 2D even in native resolution; see issue #936. We are working on it.

Example of black lines removal.

Render 2D in native resolution

This option enables multi-pass rendering of 2D elements. The problem and the solution are described in this article and in thread #800. This feature greatly improves 2D in many games, but currently it is not a silver bullet that kills all 2D issues.

Standard rendering VS render 2D in native resolution

Next: Frame buffer emulation

Saturday, November 19, 2016

New Public Release. Part I


I have good news: the new Public Release is near!

As you know, GLideN64 is an actively developed open-source project. The source code is modified almost daily. You may build the current state at any moment, or download weekly WIP builds. Thus, you may ask: what is so important about the new Public Release, and how will it differ from the WIPs?

The current public version, 1.2, was released on 6 July 2015. It was the last version funded by the GLideN64 crowd-funding campaign launched on Indiegogo. Since then I have been working on the project in my spare time, so of course development slowed down. On the other hand, open source attracts other resourceful developers to the project, and they made many useful changes to the code and helped me with ideas and advice. Thus, the work never stopped. The plugin constantly evolved, and at some point I started to feel that all these hundreds of changes deserve to be marked by a new milestone. This milestone is the new Public Release. This time there will be no private beta-testing period. My current goal is to fix the remaining regressions and clear some stuff. After that I will be able to say: the new release is ready. New translation files would also be nice to get, but they are not a must.

I am preparing a set of articles explaining what is new in the upcoming release compared with the old one. Since most of the new features are available via user options in the user interface, let's compare the GUI of both versions.
Note: the new version will have a redesigned GUI, but it is more convenient to use the current old-style one for comparison.

"Video" panel

Full screen resolution

It is the same.

Windowed resolution.

New feature: custom resolution. As before, you may select a predefined windowed resolution from the list. If you need something special, select "custom" and set the window size in the spin-box controls. In the screenshot I set the window size to 800x460 to test widescreen mode in a window.


This is a new feature made by user request. Its purpose is to remove the black borders which many N64 games add around the image. If you do not care about the black borders, just skip the explanation of this feature. ...
So, you want to finally get rid of those black borders that eat useful space on your screen. This feature is for you! It works like the standard crop in graphics editors: it cuts parts of the image from all sides and stretches the rest over the entire screen. Simple as that. However, there are several things you need to know to use this feature properly.
Crop has two modes: auto and manual. In auto mode the plugin tries to detect the empty space and remove it. There are many ways to add black borders; the auto-detection method analyzes the scissor set by the game. Scissor is a standard tool in computer graphics: it defines a rectangular area, and all pixels outside that area are automatically discarded. If the scissor is smaller than the frame buffer, the image will have empty borders, which the plugin can crop. Auto crop will fail if the game uses other methods to draw the borders. In that case you need to use manual mode, where you set the width of the horizontal and vertical areas to cut yourself. The values you provide are numbers of pixels to crop in native resolution. That is, if the game's original resolution is 320x240 and it uses borders 12 pixels wide, you need to set 12. That value will be scaled to give the correct result in the output resolution you selected. If you set the crop value too large, you may lose a useful part of the image. How do you find the right value? Either increase the crop step by step until the black borders are completely removed, or take a screenshot in native resolution and measure the border width in a graphics editor.
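The scaling of a manual crop value can be sketched as a one-liner. The function name is illustrative, not the plugin's actual identifier:

```cpp
// Scale a crop width given in native-resolution pixels to the chosen
// output resolution: a 12-pixel border at 320 pixels native width
// corresponds to 48 pixels when the output is 1280 pixels wide.
int scaledCrop(int nativeCrop, int nativeSize, int outputSize) {
    return nativeCrop * outputSize / nativeSize;
}
```

So you always enter the native-resolution value, and the plugin does the per-resolution arithmetic for you.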

Another note: crop works as a post-processing filter. Like all other post-processing filters, it works only when frame buffer emulation is enabled. The frame buffer is passed to the Video Interface, which applies the crop if necessary and puts the result on screen. I have already explained what the Video Interface is and how it works. One important fact: the VI can add its own black borders, and it usually adds one or two empty lines. These lines are not removed by Crop. Keep that in mind.

Waverace in a wide window with auto Crop

Internal Resolution.

This is a new feature made by user request. In this version of the GUI it is named "Render at constant factor of native resolution". What is it for? When frame buffer emulation is disabled, the plugin renders graphics at the screen resolution defined by the user. All graphics elements are scaled to fit the screen; the scale is defined as Screen Width / Original Width by Screen Height / Original Height. Clearly, if the game uses a 320x240 resolution but the screen is 1024x768, the scale will be fractional. There is a belief that a fractional scale leads to additional glitches, and that it is right to use only resolutions which are an integer factor of the original one. Thus users asked me to support rendering at a constant factor of native resolution. It is not hard to do with frame buffer emulation: the frame buffer can have any internal resolution, and the Video Interface will scale it to the user-defined screen resolution. A low-res buffer will be stretched to the screen resolution; a hi-res buffer will be shrunk to it. You may also set the factor to 0, which turns the option off, so the internal resolution will be the same as the screen resolution.

Two important notes:
  • Factor 1 means "render in original resolution". This is the most accurate emulation mode. Higher resolutions often cause additional glitches, especially in 2D graphics, so this mode is recommended for pure 2D games.
  • If you use Crop, part of the image will be cut and the rest stretched to the screen. You may set a high factor of native resolution and get an internal resolution above the screen one. Then your frame buffer will still be larger than the screen area after crop, and the resulting image quality can be better.
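The choice of internal resolution described above can be sketched as follows; the names are illustrative, not the plugin's actual identifiers:

```cpp
#include <utility>

// Pick the internal render resolution from the user's factor setting:
// factor 0 means "use the screen resolution as-is", factor N means
// "N times the game's original resolution". The Video Interface then
// stretches or shrinks the resulting buffer to the screen.
std::pair<int, int> internalResolution(int factor,
                                       int origW, int origH,
                                       int screenW, int screenH) {
    if (factor == 0)
        return {screenW, screenH};
    return {origW * factor, origH * factor};
}
```

For example, with a 320x240 game and factor 2 the buffer is 640x480 regardless of the screen size, so the scale relative to the original image stays an integer.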

Normal rendering (left) VS native resolution (right)


It remains the same. The only news is that AA is now supported on Android devices with a GLES 3.1 compatible GPU, thanks to Francisco Zurita's efforts. If you want to use GLideN64 on Android, Mupen64Plus FZ is your choice.

Other Video options

All other options remain the same.