Tuesday 13 August 2013

There's no other way, there's no other way

Blender is a fairly nifty, free 3D modelling application. The learning curve is a little steep, but that's the price you pay for the vast number of features. This is what I've been using to make some models for Offender.

It's also got a built-in Python interpreter, and that, gentle reader, is what I want to talk about today. Python scripts can be used to automate various tasks, and can directly access Blender's data through the plug-in API. This is just what I need to export meshes in a form that I can hook up to my C++ code, so it's time to learn Python.

Great, another language to add to the confusion. It's not just for Blender though, Python is also the language of choice for the Raspberry Pi - it's what the official GPIO API is written for. Fortunately I've already got 15 years on-and-off experience with Perl and 3 or so years of dabbling with Ruby, so it should be a seamless transition, right? Well, at least I'm used to interpreted languages and stuff like duck-typing.

Third time lucky
Let me start, as is the custom, with "Hello, world!". The picture shows the Windows Python shell and my first attempts at getting one line of code working. I've found my first real annoyance here - Python 2 and Python 3 have different syntax for print statements, among other things. My version of Blender uses Python 3.3.0, but a lot of the tutorials are for Python 2. Great.

Next up, a simple sequence generator - let's go with Fibonacci, as it's pretty simple and there are usually multiple ways of doing it. Putting the proverbial sledgehammer to it:

it = 0
v = [1, 1]
while it < 200:
  print(v[0])  # Python 3 print is a function, so the parentheses are required
  tmp = [v[1], v[0] + v[1]]
  v = tmp

  it += 1

Indentation instead of braces? Ugh.... I'm all for code readability, but I'm not a fan of enforcing it through syntax. And there's another discovery - no "++" operator, you have to use "+= 1". Initially it seems annoying, and it means you can't do things like "array[i++]", but after reading around a little it makes sense. Python rejects the Perl paradigm "there's more than one way to do it", and instead says "there should be one - and preferably only one - obvious way to do it". But more importantly it's just not necessary most of the time. The code above is a fair illustration of this, instead I could have done away with the while loop and used range.
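As a sketch of what I mean, here's the same sequence with range driving the loop instead of the manual counter (Python 3 syntax):

```python
# Same Fibonacci printer, but range drives the loop - no "it" counter,
# no manual increment.
v = [1, 1]
for _ in range(200):
    print(v[0])
    v = [v[1], v[0] + v[1]]
```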

I experimented with lots of different ways of implementing this - it seems there is more than one way after all... Eventually I settled on this as a final attempt:

v = [1, 1]
while len(v) < 200:
  v.append(v[-1] + v[-2])
for elem in v:
  print(elem)


I've put it on codepad - a great online interpreter - so you can see it in action if you feel so inclined. But it's not about the result really, it's about learning the syntax and buying into the paradigm/philosophy.

Next, write an export plugin for Blender. Seems like a big step, but how much Python do you really need for that? It's basically just array manipulation and file I/O - bread and butter for scripting languages - the most complicated bit is learning how to use the API. And there I'm having the same problem I had with Python itself - the API has changed dramatically over recent versions, so a lot of the tutorials and existing plugins don't work. In fact some of the API tutorials use a 2.x version of Python. In general the code isn't backwards compatible at all, though with a bit of intelligence you can convert most of it without too much difficulty.

By default, recent versions of Blender store meshes as polygons of arbitrary size. Unfortunately OpenGL's streamlined API can only draw triangles, so those polygons need converting. Triangulation is a well-known problem; it can get quite complex if you can't assume convex polygons, but thankfully Blender can do the hard work for you and convert everything to what it calls tessfaces, which are triangles or quads. Quads are fairly trivial to triangulate, assuming for now that they're all convex. There's also a separate Blender API called BMesh, but that's mostly used for importing and creating meshes rather than exporting.
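Splitting a convex quad is just a matter of cutting along one diagonal. A sketch of the idea (my own function and names, not the actual plugin code):

```python
# A tessface arrives as a tuple of 3 or 4 vertex indices; return a list
# of triangles. The diagonal split below is only valid for convex quads.
def triangulate(face):
    if len(face) == 3:
        return [tuple(face)]
    a, b, c, d = face
    # Cut the quad (a, b, c, d) along the a-c diagonal.
    return [(a, b, c), (a, c, d)]

print(triangulate((0, 1, 2, 3)))  # [(0, 1, 2), (0, 2, 3)]
```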

Perfectly shaded object in Blender becomes...
I've set up my export plugin as a panel on the thingy that comes up when you select an object, with a textbox for a class name and buttons to export a .h and a .cpp file. I tried combining them into a single button which would export both, but I couldn't get that to work. My output is a huge array of triangles, which works OK. The thing is that graphics cards prefer triangle strips or triangle fans to discrete triangles, but I can't see any way of extracting strips from Blender. Someone will have come up with an algorithm for building strips, but I've not found a suitable one yet. I have discovered that optimally dividing a mesh into triangle strips is an NP-complete problem, but I'd settle for any sub-optimal solution that's better than discrete triangles.

... dodgily shaded object in game
My exporter isn't ideal and could do with a lot of improvement, but it works, so I've been able to put something into the game. I've also added specular highlights, but they don't quite look right. I think it may be because I'm still doing Gouraud shading, and switching to Phong shading might make it look better. Though Phong shading everywhere would be nice, it's a lot more demanding on the GPU as you have to do calculations for every fragment instead of every vertex. Probably not an issue on my monster desktop, but more of a concern on the Raspberry Pi.

Next I'd like to move on to getting some kind of physics model set up. I've continued to read up about aerodynamics, and one thing I discovered was that a real-world flying saucer prototype did exist - the Avrocar, which utilised the Coandă effect to generate lift. Unfortunately it proved to be too unstable, could only get a few feet off the ground, and wasn't very fast either, so it was discarded as dead-end technology.

The problem with a flying saucer is that the shape is inherently unstable. With no tail it's essentially just a wing, albeit a funny-shaped one. The aerodynamic centre of a wing is about a quarter of the way back from the leading edge - as the leading edge of a flying saucer is curved it's likely to be a bit further back, but certainly still forward of the centre of gravity. That leads to negative longitudinal static stability, i.e. the aerodynamics tend to make a small deviation in pitch worse rather than correcting it. I suppose I could rationalise some alien technology which stabilises it? Making it a half-disk might work. I kinda like the idea of making something which looks like a classic flying saucer, but cut in half with massive engines slapped on the back. Or maybe I'll just fudge the physics, as accurate aerodynamics are going to be nigh-impossible anyway.
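The instability argument boils down to a one-liner. A toy sketch (my own numbers and function, nothing to do with real aerodynamic coefficients): measure positions as fractions of chord back from the nose, and call nose-up moments positive.

```python
# Extra lift from a nose-up disturbance acts at the aerodynamic centre
# (AC). Its moment about the centre of gravity (CG) is lift times arm;
# with the AC ahead of the CG the result is positive (more nose-up),
# so the disturbance grows instead of damping out.
def pitch_moment(extra_lift, x_ac, x_cg):
    return extra_lift * (x_cg - x_ac)
```

With the AC at 25% chord and the CG at 50%, pitch_moment(1.0, 0.25, 0.5) comes out positive, i.e. destabilising; a conventional tailplane effectively drags the combined aerodynamic centre behind the CG and flips the sign.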

My biggest problem at the moment is focus. My chum Thom suggested clouds for better context, and that set me off on a journey of discovery. I've been reading about Perlin Noise, which can be used to make nice fluffy clouds and is often used for terrain too. Volumetric clouds seem to be the way to go, which led me on to volume rendering, a completely different 3D paradigm to the usual polygon rendering methods. I could use the same method for explosions too. Then I started wondering about water that ripples, is reflective and refractive, crepuscular rays, lens flare, and so on and so on. What I'm coming to realise is quite how bleedin' awesome shaders are. I've barely scratched the surface of what vertex and fragment shaders can do, let alone more recent innovations like geometry, tessellation and compute shaders. Effects which were previously only possible in pre-rendered movie sequences are now possible in realtime. The possibilities are nigh endless, and any self-respecting geek loves possibilities.

I've got to keep the ambition in check now, so I'm thinking of splitting this into two separate projects. One will be the original anti-Defender, much simpler than it currently is with really basic wrapping terrain, maybe resort to a fixed height map, cut back the draw distance so I don't have to worry about that, etc. Then I can focus on that until it's done. I've got the basic infrastructure, I've got a lot of the geometry and OpenGL stuff sorted, I've got terrain and a path to get objects on screen. Now I need physics, AI and explosions and I'm pretty much there - the rest is just bells and whistles. I'll think about a Raspberry Pi port, then go to town on pretty effects for the desktop version.

Then once that's done, I'll pick up the complicated stuff in a new project. I was thinking of doing something a bit like Mercenary. But given how much time I'm spending on the first project, it may be a little while before I get around to doing that.

Thursday 4 April 2013

There is a light and it never goes out

I've not really done much on Offender over the last couple of months, instead I've been playing a lot - Deus Ex: Human Revolution, Crysis, Mass Effect 3 and a bit of Fallout: New Vegas. Kind of interesting that the more time I spend working with OpenGL the more I look at these games and think "how would I do that?"

I've started putting lighting into my terrain. For the uninitiated, 3D APIs generally use three types of lighting. Ambient lighting is applied evenly to all surfaces, modelling light that scatters and bounces all over the scene. Diffuse lighting takes a direct path from the source, but scatters when it hits a surface, so surfaces facing the source appear brighter no matter where the viewer is. Finally, specular lighting reflects off shiny surfaces, creating highlights at points which reflect light back to the viewer. Fixed-function pipelines typically provided all of these; these days they all need to be implemented with shaders.
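The three terms add up to a short sum evaluated per vertex (or per fragment). Here's a toy Python version of the classic Phong model - my own simplification for illustration, not actual shader code:

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def light(normal, to_light, to_viewer, ka=0.1, kd=0.7, ks=0.2, shininess=32):
    n, l, v = normalize(normal), normalize(to_light), normalize(to_viewer)
    diffuse = max(dot(n, l), 0.0)  # brighter when facing the light
    r = tuple(2 * dot(n, l) * nc - lc for nc, lc in zip(n, l))  # reflect l about n
    # Specular only applies on the lit side; the exponent sharpens the highlight.
    specular = max(dot(r, v), 0.0) ** shininess if diffuse > 0 else 0.0
    return ka + kd * diffuse + ks * specular  # ambient + diffuse + specular
```

A surface facing both the light and the viewer head-on gets the full 1.0; one facing away from the light falls back to the 0.1 ambient floor.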

For my terrain I'm just using ambient and diffuse lighting. Specular highlights are more complex than the other two, and grass and rocks generally aren't all that shiny, so it's not really necessary. Water is shiny, but I was thinking of doing that as a separate entity to the terrain with its own shader. With the default per-vertex Gouraud shading, specular highlights can look a bit iffy - you really need Phong shading, but that's per-pixel, so it inflicts a lot of computation on the fragment shader. Having said this, bump-mapping might be worth doing and that's also per-pixel - but that's one for later, methinks.

Also, most of the sample shaders I've seen convert everything into eye coordinates (i.e. relative to the viewer), whereas I've just left everything in world coordinates (i.e. relative to the world origin). This saves multiplying everything by the view matrix, and as ambient and diffuse lighting are viewer-independent it shouldn't make any difference. With the only light source being the sun at a fixed infinity, I've been able to get away with really simple shaders.

However, the lighting has emphasised a problem with my terrain generation, as there are prominent "ripples" across the surfaces. That'd probably look great on sand dunes, but not so good on grass and rocks. This is almost certainly an effect of using an LCG (linear congruential generator) for random number generation, though I've not done the maths to try and explain it properly - something to do with serial correlation, I guess?

The reason I'm using an LCG is that it's fast, and I'm reluctant to move to a better algorithm if it's going to be prohibitively slow. I experimented with CRC32 to get rid of those ripples; it looked a bit better, but strangely symmetrical - again, there's probably a good mathematical reason for this. However, combining an LCG and a CRC produced decent results.
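To give a flavour of the combination, here are my own toy Python versions - the real terrain code is C++, and the constants below are the common Numerical Recipes ones, not necessarily what the game uses:

```python
import struct
import zlib

def lcg(seed):
    # One step of a 32-bit linear congruential generator. Fast, but
    # successive outputs are serially correlated - hence the ripples.
    return (1664525 * seed + 1013904223) & 0xFFFFFFFF

def height(x, y):
    # Hash the grid coordinates through CRC32 to break up the pattern
    # between neighbouring cells, then push the result through the LCG
    # and map it to [0, 1).
    h = zlib.crc32(struct.pack("<ii", x, y)) & 0xFFFFFFFF
    return lcg(h) / 2**32
```

The result is deterministic per coordinate - the same tile always regenerates identically - but without the obvious directional correlation of a raw LCG sequence.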

LCG-based terrain. See those ripples.
CRC-based terrain. Strangely symmetrical (and a bit ripply).
LCG/CRC combo. Much better.
To see if moving away from an LCG made terrain generation noticeably slower, I added some crude timing info to my debug build. The total time for calculating the vertices alone went from about 5ms per tile (pictured terrain is 3x3 tiles) with just an LCG to around 50ms with the LCG/CRC hybrid - a 10x increase! That kinda vindicates my decision to go with an LCG in the first place. Switching to the release build, it took about 5ms for LCG/CRC vertices. Timing the rest of the initialisation for comparison, the only other significant block was the bit which copies data to OpenGL buffers, at around 8ms. So, very approximately, the vertex generation with LCG/CRC is a third of the time taken to generate the tile, and the total time for the tile is around 16ms - a frame at 60Hz. I can live with that.

Incidentally, I made a few discoveries while doing this. Extracting textures from files takes a long time - ~220ms for 3 textures, about the same in release and debug. I was re-loading the same textures every time I generated a tile when I should have been sharing them, so there's fairly obvious room for improvement there. I also found that if I invoked the release build from outside Visual Studio, the textures didn't load. Some path issue, I guess? I really must learn to put in helpful error messages rather than flippant remarks or swear words.

My next step was going to be expanding the area by generating terrain on-the-fly. Given that it's taking around a frame to generate a tile and the CPU has plenty of other things to be doing, I'd need to split the generation across multiple frames, using whatever spare time remains before the end of each frame. Unfortunately OpenGL's syncing options are fairly limited, so it'd be best done in a separate thread... and I really don't want to go multi-threaded yet, because threads are evil. Honestly, I'm still not great at C++ and am having enough trouble debugging strange behaviour without threads introducing a bunch of concurrency issues and non-deterministic behaviour. So I'm going to park this idea until version 2.0. For now I'll either stick with a load of terrain generated at initialisation, or create some height maps.

Speaking of strange behaviour, I noticed that at certain points the player object would start to vibrate violently on screen. I realised it was actually the camera vibrating - it's just that the player is the only thing close to the camera - and the cause was numerical instability in my matrix inverse. That was solved by reordering the rows in the matrix, such that the element the algorithm pivots on is the one with the largest absolute value. Ultimately it was a simple fix, but it was an interesting problem, as it illustrated the practical implications of a mathematical phenomenon. I'd procrastinated on putting this pivoting in, and it just goes to show that taking shortcuts comes back to bite you in the long run - you're better off doing things properly in the first place.
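The fix amounts to one extra step in the elimination loop. A standalone Python sketch of Gauss-Jordan inversion with partial pivoting (my own version for illustration; the game's matrix code is C++):

```python
def invert(m):
    n = len(m)
    # Augment the matrix with the identity, then reduce the left half.
    a = [row[:] + [float(i == j) for j in range(n)] for i, row in enumerate(m)]
    for col in range(n):
        # Partial pivoting: swap in the row with the largest absolute
        # value in this column, so we never divide by a tiny pivot.
        pivot = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[pivot] = a[pivot], a[col]
        p = a[col][col]
        if p == 0.0:
            raise ValueError("singular matrix")
        a[col] = [x / p for x in a[col]]
        for r in range(n):
            if r != col:
                f = a[r][col]
                a[r] = [x - f * y for x, y in zip(a[r], a[col])]
    # The right half now holds the inverse.
    return [row[n:] for row in a]
```

Without the max() line, the same code would try to pivot on a zero (or near-zero) diagonal element whenever one turns up - which is exactly the instability that shook my camera.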

Next up, time to divert my attention back to the player object as it seems faintly ridiculous to have vast swathes of detailed, accurately lit terrain with nothing but a matte purple wedge cruising over it.