
Lightmaps, ESM & Fog of War

[Screenshot: click to see giant version]

Progress on Free Company continues at a steady pace here in the shed. Unfortunately I've been pretty lax about reflecting that progress on the blog, but no more: today I come with tales of newly implemented features, bugs fixed and graphical systems steadily improved.

First up is the new fog of war system. The implementation of fog of war sat at the bottom of my many & various scrawled to-do lists for a fairly long time; I knew I wanted it in the game, but I wasn't quite sure how to get it working and running at a decent speed. The first problem was that there was no obvious example to be 'inspired' by: most games I could find using fog of war were either fully 2D or didn't combine it with a fully rotatable 3D camera. I needed a solution that obscured a given hex from all possible angles when none of the mercenaries could see it. I also wanted a semi-transparent view of areas that the mercenaries had already visited.

Anyway, as you can just about see above, I managed to figure it out by using sort-of-hexagonal cages rendered over the top of the level geometry, with a complicated blend mode producing the semi-transparent version without showing the sides of all the neighbouring cages. It isn't quite perfect, as there is only a subtraction operation available to do the 'transparency' rather than the normal multiply, but it works passably enough and, most importantly, isn't horrifically slow.
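Before the rendering details, the bookkeeping side is simple enough to sketch. This is a minimal illustration of the three states implied above (never seen, previously visited, currently visible), with hypothetical names rather than the actual Free Company code:

 #include <vector>

 // Hypothetical sketch, not the real game code: per-hex fog state.
 enum HexFogState { FOG_UNEXPLORED, FOG_EXPLORED, FOG_VISIBLE };

 void UpdateFogOfWar(std::vector<HexFogState>& hexes,
                     const std::vector<int>& visibleHexIndices)
 {
     // Anything visible last frame drops back to 'explored'...
     for (size_t i = 0; i < hexes.size(); ++i)
         if (hexes[i] == FOG_VISIBLE)
             hexes[i] = FOG_EXPLORED;

     // ...then everything a mercenary can currently see becomes visible.
     for (size_t i = 0; i < visibleHexIndices.size(); ++i)
         hexes[visibleHexIndices[i]] = FOG_VISIBLE;
 }

Unexplored hexes then get an opaque cage, explored ones the semi-transparent treatment, and visible ones no cage at all.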

The blend modes look like this:

 // Colour blend: final = dest - (src * destAlpha), i.e. the cage
 // darkens what's underneath it, masked by the destination alpha.
 HR(g_d3dDevice->SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE));
 HR(g_d3dDevice->SetRenderState(D3DRS_SEPARATEALPHABLENDENABLE, TRUE));
 HR(g_d3dDevice->SetRenderState(D3DRS_BLENDOP, D3DBLENDOP_REVSUBTRACT));
 HR(g_d3dDevice->SetRenderState(D3DRS_SRCBLEND, D3DBLEND_DESTALPHA));
 HR(g_d3dDevice->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_ONE));
 // Alpha blend: both factors are zero, so the destination alpha is
 // cleared to zero wherever a cage is drawn.
 HR(g_d3dDevice->SetRenderState(D3DRS_SRCBLENDALPHA, D3DBLEND_ZERO));
 HR(g_d3dDevice->SetRenderState(D3DRS_DESTBLENDALPHA, D3DBLEND_ZERO));

…just don’t ask me to explain them because I did it a month or so ago now.

After I got the fog working I spent a fair bit of time improving the intuitiveness of some of the UI elements. The sliders and scroll bars now work more like proper scroll bars with live updates (change the music volume in terrifying real-time!!) and the buttons have proper embossing so they look like buttons. I also fixed a whole bunch of tiny pixel-offset problems with things like the text and the basic UI rectangles that were causing slight (but noticeable) visual problems.

Next up was implementing a new lighting technique called light mapping. Fixing what at first glance looked like a small visual problem turned into a bigger project than I'd hoped, but now it is done and as a result I have a bit more flexibility with lighting. The basic problem was that my static geometry (which covers all the walls, floors, shelves and so on) could only be affected by three lights simultaneously. On the older graphics cards I wanted to support there was simply no way to physically pack any more lighting data into the vertex buffers, or into the shader instruction count if I switched back to slower dynamic lighting.

At first I'd tried to alleviate this problem by keeping the number of lights in any given generated room under three, which worked to an extent, but inevitably the random generation meant that occasionally a light from a corridor adjacent to a room would push the number of lights affecting a mesh over three, producing obvious lighting discontinuities. I tried implementing a range of simple 'light blockers' to reduce the problem further, but those didn't really help as they had no way of dealing with a mesh lit from more than one side (such as the very frequently used room corners). So I either had to put up with the lighting discontinuities (they were of variable severity, but in the worst case adjacent wall meshes had wildly different lighting colours and brightnesses) or come up with a new lighting scheme.

There are two basic approaches, the modern and the retro. The modern approach involves using deferred rendering for basically everything and is slowly becoming the approach all modern engines are moving towards, as it has the most flexibility and the fewest disadvantages. Unfortunately, in my case the problem was being caused by trying to support older graphics cards in the first place; it isn't much of a solution to switch to deferred shading and cut out all those old GPUs that don't have the necessary oomph for it. So I was left with the retro approach: lightmapping.

[Screenshot: click to show giant version]

Lightmapping isn't an ideal fit for my game because it is principally a pre-computed technique: it gets most of its advantages from knowing the geometry arrangement at data-build time and being able to spend as long as it likes crafting really fancy lighting for it. All my geometry layouts, however, are generated on the fly each time the player starts a new level, and I don't have time to run an expensive set of ray-traced lighting calculations while a player is sitting there waiting for the level to load. Luckily you can make the lighting calculation as simple as you like when generating light maps, so I set the dial to 'super-simple' and set about getting them actually working.

Lightmapping as a technique actually contains several smaller problems that need solving:

  1. generating lightmapping UV coordinates.
  2. packing the lightmapping UV coordinates of all the instances proportionally to the surface area being lit.
  3. interpolating the positions & normals of all the mesh instances.
  4. generating the actual lighting data.
  5. rendering with lightmapping.

The last part is the easiest: if you've done it all right you can just read your lighting from a texture using your specially generated UV coordinates. The other parts were not so simple.
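To illustrate, here is roughly what the render side looks like using the fixed-function pipeline, purely as a sketch: the texture names are placeholders, and my actual game samples the lightmap in a pixel shader instead. The lightmap sits in a second texture stage, is addressed by the second UV set, and modulates the base texture:

 // Sketch only: stage 0 holds the diffuse texture, stage 1 the lightmap.
 HR(g_d3dDevice->SetTexture(0, diffuseTexture));
 HR(g_d3dDevice->SetTexture(1, lightmapTexture));
 // Address the lightmap with the second set of UVs (index 1)...
 HR(g_d3dDevice->SetTextureStageState(1, D3DTSS_TEXCOORDINDEX, 1));
 // ...and multiply it with the output of stage 0.
 HR(g_d3dDevice->SetTextureStageState(1, D3DTSS_COLOROP, D3DTOP_MODULATE));
 HR(g_d3dDevice->SetTextureStageState(1, D3DTSS_COLORARG1, D3DTA_TEXTURE));
 HR(g_d3dDevice->SetTextureStageState(1, D3DTSS_COLORARG2, D3DTA_CURRENT));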

For the first part I decided to create my lightmapping UVs as part of my models' mesh data rather than generating them algorithmically, mainly because this is one of the few steps I could take 'off-line', but also because I, perhaps foolishly, thought it might be easier to make them this way. I used Blender to generate my UVs, and if you do the same let me give you the most useful tip straight off: the Blender 'lightmap UV' generation script is pretty much useless for complex geometry. By which I mean any curved surface: unless you have infinite space in your lightmaps you are going to want the UVs of a curved surface stored contiguously, so that the sampler can smoothly interpolate across the surface. The Blender script, by contrast, breaks every face up into separate UV 'islands' and then tries to pack them in any old order, bah.

Anyway, I also had other problems to overcome with UV coordinate generation: firstly, the only decent .x exporter script I managed to find for Blender 2.49 didn't have any support for multiple UVs, and secondly the .x format itself makes it very difficult to work out how to cram extra data beyond the basics into your meshes. Once you do work it out, it is excruciatingly difficult to convert the data into the required (DWORD) format in Python. You will need this piece of code:

import ctypes

def convertFloatToDWORD(value):
    # Reinterpret the float's 32 bits as an unsigned int (DWORD)
    # without changing the bit pattern.
    pF = ctypes.pointer(ctypes.c_float(value))
    pDw = ctypes.cast(pF, ctypes.POINTER(ctypes.c_uint))
    return pDw[0]

(from here) if you want to have a chance.

Part 2 of the lightmapping problem wasn't quite as difficult. I used a very simple rectangle-packing algorithm, on the basis that it would probably be fastest, and scaled each instance's UV rectangle by the surface area of the asset (calculated during loading). Make sure to keep track of the calculated UVs somewhere, as you'll probably want to pack them into your static geometry when you batch it up.
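To make that concrete, here's a rough sketch of the kind of packer I mean, with my own hypothetical names and a square-region simplification that the real code may not share. Each instance gets a region whose side is proportional to the square root of its surface area, laid out left to right in rows:

 #include <algorithm>
 #include <math.h>
 #include <vector>

 // Hypothetical sketch, not the actual Free Company packer.
 struct LightmapRect
 {
     float area;       // in: world-space surface area of the instance
     float u, v, size; // out: normalised [0,1] lightmap region
 };

 void packLightmapRects(std::vector<LightmapRect>& rects, float scale)
 {
     float x = 0.0f, y = 0.0f, rowHeight = 0.0f;
     for (size_t i = 0; i < rects.size(); ++i)
     {
         // Side length proportional to sqrt(area) keeps the texel
         // density roughly constant across differently sized meshes.
         float side = sqrtf(rects[i].area) * scale;
         if (x + side > 1.0f) // row full: start the next one
         {
             x = 0.0f;
             y += rowHeight;
             rowHeight = 0.0f;
         }
         rects[i].u = x;
         rects[i].v = y;
         rects[i].size = side;
         x += side;
         rowHeight = std::max(rowHeight, side);
     }
 }

Sorting the rectangles largest-first before packing tends to waste less space, but even the naive version runs in negligible time at level load.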

Part 3 was trickier, and after stumbling around with the semi-missing code at flipcode for a while I hit on barycentric coordinates as the interpolation method of choice. They produced the nicely smoothed normals I was looking for and took a whole lot less code too:

 // Compute the UV-space edge vectors of the triangle
 D3DXVECTOR2 edge0 = faceCorner2uv - faceCorner0uv;
 D3DXVECTOR2 edge1 = faceCorner1uv - faceCorner0uv;
 D3DXVECTOR2 edge2 = uv - faceCorner0uv;
 // Compute dot products
 float dot00 = D3DXVec2Dot(&edge0, &edge0);
 float dot01 = D3DXVec2Dot(&edge0, &edge1);
 float dot02 = D3DXVec2Dot(&edge0, &edge2);
 float dot11 = D3DXVec2Dot(&edge1, &edge1);
 float dot12 = D3DXVec2Dot(&edge1, &edge2);
 // Compute barycentric coordinates
 float invDenom = 1.0f / (dot00 * dot11 - dot01 * dot01);
 float u = (dot11 * dot02 - dot01 * dot12) * invDenom;
 float v = (dot00 * dot12 - dot01 * dot02) * invDenom;
 float w = 1.0f - u - v;
 // Weight each corner's normal by its barycentric coordinate
 worldNormal = (faceCorner2normal * u) + (faceCorner1normal * v) +
               (faceCorner0normal * w);

So use those for everything interpolation-related.
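The same weights also tell you whether a lightmap texel lands inside the triangle at all, which you need when rasterising each face into the lightmap. A sketch of that test and of position interpolation, again under my own naming assumptions:

 // Hypothetical sketch: given the barycentric weights from the snippet
 // above, reject texels outside the triangle and interpolate the
 // world-space position with the same weighting as the normal.
 bool interpolateTexel(float u, float v, float w,
                       const D3DXVECTOR3& faceCorner0pos,
                       const D3DXVECTOR3& faceCorner1pos,
                       const D3DXVECTOR3& faceCorner2pos,
                       D3DXVECTOR3& worldPos)
 {
     // A texel is inside the triangle only if all three barycentric
     // coordinates are non-negative (they always sum to one).
     if (u < 0.0f || v < 0.0f || w < 0.0f)
         return false;

     // Corner 2 weighted by u, corner 1 by v, corner 0 by w, exactly
     // as in the normal interpolation above.
     worldPos = (faceCorner2pos * u) + (faceCorner1pos * v) +
                (faceCorner0pos * w);
     return true;
 }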

The lighting code I already had, though it is worth bearing in mind that with lightmapping the sum total of your lights will be clamped to the 0.0 to 1.0 range by the necessity of texture storage. That doesn't sound like a big deal, but it can make a pretty noticeable difference when you are summing the influence of multiple point lights and then multiplying the result by other lighting terms in your shader.
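For the curious, the 'super-simple' evaluation I have in mind looks something like the following; the PointLight structure and the linear falloff are illustrative assumptions rather than the exact code the game uses:

 #include <vector>

 // Hypothetical sketch of a per-texel bake: sum distance-attenuated
 // point lights at the interpolated position & normal, then clamp
 // for 8-bit texture storage (the saturation mentioned above).
 struct PointLight
 {
     D3DXVECTOR3 position;
     D3DXCOLOR   colour;
     float       range;
 };

 D3DXCOLOR bakeTexel(const D3DXVECTOR3& worldPos,
                     const D3DXVECTOR3& worldNormal,
                     const std::vector<PointLight>& lights)
 {
     D3DXCOLOR sum(0.0f, 0.0f, 0.0f, 1.0f);
     for (size_t i = 0; i < lights.size(); ++i)
     {
         D3DXVECTOR3 toLight = lights[i].position - worldPos;
         float dist = D3DXVec3Length(&toLight);
         if (dist < 0.0001f || dist >= lights[i].range)
             continue;
         // Lambertian term (toLight / dist is the unit direction)
         float nDotL = D3DXVec3Dot(&worldNormal, &toLight) / dist;
         if (nDotL <= 0.0f)
             continue;
         // Simple linear distance falloff
         float atten = 1.0f - dist / lights[i].range;
         sum += lights[i].colour * (nDotL * atten);
     }
     // An 8-bit lightmap texel can't store anything above 1.0
     sum.r = sum.r > 1.0f ? 1.0f : sum.r;
     sum.g = sum.g > 1.0f ? 1.0f : sum.g;
     sum.b = sum.b > 1.0f ? 1.0f : sum.b;
     sum.a = 1.0f;
     return sum;
 }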

Anyway, after a lot of careful hand-crafting of UVs that was all eventually finished, and now I can have as many lights as I like per room without fretting too much; all the discontinuity artefacts are gone. Aces.

Next up on my lighting refactor mission was the shadow-mapping code. It's been working OK for a while now, but has always shown some 'shadow acne' at certain camera angles and, worse, the acne shimmered whenever the camera was moving, immediately drawing your eye to it. I spent some time tweaking the current code and fiddling with bias values and resolution, but no matter what I could never satisfactorily remove the shimmering acne. So I figured there must be another way by now.

Of course, some careful googling later introduced me to the world of Variance Shadow Mapping (VSM) and Exponential Shadow Mapping (ESM). This blog was a great summary of the best places to learn about each technique, and really they aren't dramatically different from plain shadow mapping. Once you have basic shadow mapping set up in your game it is no more than a morning's work to try out both VSM and ESM, and I would recommend everyone struggling with shadow-mapping artefacts give them a go. You'll then probably settle on ESM because, at least for me, the light-bleeding artefacts with VSM were pretty obvious and just as bad as shadow acne. ESM, however, immediately worked great and cured my shadows of acne, tedious bias tweaking and shimmering. I do have one difference with the blog linked above: he mandates keeping the over-darkening parameter between 0.0 and 1.0, whereas I found that the original range specified in the nvidia example worked a lot better in my game, so don't be afraid to crank that term up.
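For reference, the core of the ESM maths as I understand it from those articles, written here as plain C++ for clarity (in a real renderer both halves live in shaders); c is the over-darkening constant under discussion:

 #include <math.h>

 // At shadow-map render time, store exp(c * depth) for each occluder.
 float esmStoredValue(float occluderDepth, float c)
 {
     return expf(c * occluderDepth);
 }

 // At shading time the product works out to exp(c * (occluder - receiver)):
 // about 1.0 when the receiver is at or in front of the occluder, falling
 // off exponentially the further behind the occluder it sits.
 float esmShadowFactor(float storedValue, float receiverDepth, float c)
 {
     float s = storedValue * expf(-c * receiverDepth);
     return s > 1.0f ? 1.0f : s; // saturate; larger c means harder edges
 }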

Lastly, for the past day I've been fiddling with improving the SSAO term. I've not totally settled on a method yet, but so far I've replaced my basic box blur with a 'bilateral' version that respects normal and depth discontinuities, and had a stab at sticking this new-fangled FXAA on top of that so its jaggy edges don't completely ruin my lovely regular MSAA rendering. I'm not totally sure the FXAA is completely working, but eh, I might come back to it later.
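The idea behind the bilateral weighting, sketched with hypothetical names: a blur tap only contributes when its depth and normal are close to the centre sample's, which stops the blur dragging the AO term across geometric edges.

 #include <math.h>

 // Hypothetical sketch, not the exact weights my blur uses.
 float bilateralWeight(float centreDepth, float tapDepth,
                       const D3DXVECTOR3& centreNormal,
                       const D3DXVECTOR3& tapNormal,
                       float depthTolerance)
 {
     // Reject taps across a depth discontinuity...
     if (fabsf(centreDepth - tapDepth) > depthTolerance)
         return 0.0f;
     // ...and fade out taps whose normals disagree with the centre's.
     float nDot = D3DXVec3Dot(&centreNormal, &tapNormal);
     return nDot > 0.0f ? nDot : 0.0f;
 }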

Anyway, that is probably enough lighting stuff for now, as I've reached the bottom of the lighting to-do list. Next week I'll likely start by tackling a whole range of bugs and minor polish problems, and then it'll probably be back to either skills & related UI improvements, better AI routines, or real-time group movement between battles.