Friday, 21 December 2012

Talkin' 'bout my generation

Most of the mini-milestones I set myself in my last entry have gone pretty well:
Collision detection is one thing, collision handling is another
  • I decided against switching matrix libraries. The boost one doesn't have a built-in matrix inverse, and inversion is the bit that's most iffy about my own matrix class. Once I'd implemented Gauss-Jordan elimination (pretty bloody well, if I may say so myself) I had everything I needed - there's a sketch of the idea just after this list. I also discovered that the uniform matrix functions in OpenGL have a built-in transpose flag, which is useful for overcoming the row-major/column-major issue.
  • Quaternions - not very intuitive, but dead easy once you get the hang of them. A single "orientation" quaternion can be directly translated into the model matrix, or can be used to rotate an arbitrary vector on the model to find where that vector lies in the current orientation. For example, if vertical thrust is in the Y direction in the model, rotate that by the orientation quaternion and you've got the new direction of thrust (there's a quick sketch of the rotation after this list).
  • Mouse input seemed simple at first as I used WM_MOUSEMOVE, but the problem with that is that it's bounded by the window or screen. It took me a while to find the right solution - many people seem to advocate moving the cursor back to the centre of the window, but I reckon the best way is to use raw input. Once you know how, it's pretty simple and works beautifully (the registration and handling boil down to the snippet below the list).
  • A chase camera, as expected, was very easy once I had the stuff above in place. However it caused a lot of grief as I forgot to give it an object to chase, and I started getting some very weird errors - unhandled exceptions in crtexe.c. Turns out that's Visual Studio's special way of saying "segmentation fault" or "uninitialised pointer". Still, I got to the bottom of it fairly quickly and learned a lot about VS's heap debug features in the process.
  • Vertex buffers were again much easier than I thought. You just have to be careful to unbind the buffer when you're done or it'll confuse any subsequent OpenGL code which doesn't use buffer objects, and careful not to do out-of-bounds memory accesses or it can crash the video driver. I'm also using index buffers; they make my code a lot simpler and take up less memory. All in all I'm now able to have many more triangles on-screen without any creaking at the seams (there's a buffer-setup sketch after the list).
  • Collision detection is really quite hard. I'm just doing the most basic test - player collisions with terrain based on the player's "bounding sphere" intersecting with the terrain tile. Once again the coding isn't the problem - it's remembering all of the maths. How do you find the distance between a plane and a point again? Oh yeah... find the plane normal, scalar projection, Bob's your uncle (that one's sketched below too). There's a lot more work to do here - I'll eventually have to do BSP trees, I guess - but it's usable for now.
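Since I mentioned it, here's roughly how the Gauss-Jordan inversion works - a minimal sketch on a plain row-major 4x4 array rather than my actual matrix class, so the names and layout are just for illustration:

    #include <algorithm>
    #include <cmath>
    #include <cstring>

    // Invert a row-major 4x4 matrix in place using Gauss-Jordan elimination
    // with partial pivoting. Returns false if the matrix is singular.
    bool invert4x4(float m[4][4])
    {
        // Start from the identity; it turns into the inverse as m is reduced.
        float inv[4][4] = { {1,0,0,0}, {0,1,0,0}, {0,0,1,0}, {0,0,0,1} };

        for (int col = 0; col < 4; ++col)
        {
            // Partial pivoting: pick the row with the largest entry in this column.
            int pivot = col;
            for (int row = col + 1; row < 4; ++row)
                if (std::fabs(m[row][col]) > std::fabs(m[pivot][col]))
                    pivot = row;
            if (std::fabs(m[pivot][col]) < 1e-8f)
                return false;                          // no usable pivot: singular

            // Swap the pivot row into place in both matrices.
            for (int k = 0; k < 4; ++k)
            {
                std::swap(m[col][k],   m[pivot][k]);
                std::swap(inv[col][k], inv[pivot][k]);
            }

            // Scale the pivot row so the pivot element becomes 1.
            float scale = 1.0f / m[col][col];
            for (int k = 0; k < 4; ++k) { m[col][k] *= scale; inv[col][k] *= scale; }

            // Eliminate this column from every other row.
            for (int row = 0; row < 4; ++row)
            {
                if (row == col) continue;
                float factor = m[row][col];
                for (int k = 0; k < 4; ++k)
                {
                    m[row][k]   -= factor * m[col][k];
                    inv[row][k] -= factor * inv[col][k];
                }
            }
        }

        std::memcpy(m, inv, sizeof(inv));
        return true;
    }

The transpose flag I mentioned is the third argument to glUniformMatrix4fv: pass GL_TRUE and OpenGL transposes a row-major matrix on the way in.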
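The vector-rotation trick from the quaternion bullet looks something like this - a bare-bones sketch with made-up struct names, assuming a unit quaternion stored in w + xi + yj + zk order:

    struct Quat { float w, x, y, z; };   // assumed order: w + xi + yj + zk
    struct Vec3 { float x, y, z; };

    // Hamilton product of two quaternions.
    Quat mul(const Quat& a, const Quat& b)
    {
        Quat r;
        r.w = a.w*b.w - a.x*b.x - a.y*b.y - a.z*b.z;
        r.x = a.w*b.x + a.x*b.w + a.y*b.z - a.z*b.y;
        r.y = a.w*b.y - a.x*b.z + a.y*b.w + a.z*b.x;
        r.z = a.w*b.z + a.x*b.y - a.y*b.x + a.z*b.w;
        return r;
    }

    // Rotate v by the unit quaternion q: treat v as a pure quaternion and
    // compute q * v * conjugate(q).
    Vec3 rotate(const Quat& q, const Vec3& v)
    {
        Quat p  = { 0.0f, v.x, v.y, v.z };
        Quat qc = { q.w, -q.x, -q.y, -q.z };   // conjugate = inverse for a unit quaternion
        Quat r  = mul(mul(q, p), qc);
        Vec3 out = { r.x, r.y, r.z };
        return out;
    }

    // e.g. if thrust is +Y in model space, its current world-space direction is:
    //   Vec3 up = { 0.0f, 1.0f, 0.0f };
    //   Vec3 thrustDir = rotate(orientation, up);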
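For the raw input, the whole thing really is just a registration call and a handler for WM_INPUT. A rough sketch of the two pieces (the function names are mine, the Win32 calls are the real ones):

    #include <windows.h>

    // Register for raw mouse input: usage page 0x01 ("generic desktop"),
    // usage 0x02 ("mouse"). Call once after the window is created.
    bool registerRawMouse(HWND hwnd)
    {
        RAWINPUTDEVICE rid;
        rid.usUsagePage = 0x01;
        rid.usUsage     = 0x02;
        rid.dwFlags     = 0;         // default behaviour: input while focused
        rid.hwndTarget  = hwnd;
        return RegisterRawInputDevices(&rid, 1, sizeof(rid)) == TRUE;
    }

    // Call this from the window procedure when msg == WM_INPUT. The deltas
    // are relative movements straight from the device, so they're never
    // clamped to the window or screen.
    void handleRawMouse(LPARAM lParam, long& dx, long& dy)
    {
        RAWINPUT raw;
        UINT size = sizeof(raw);
        GetRawInputData(reinterpret_cast<HRAWINPUT>(lParam), RID_INPUT,
                        &raw, &size, sizeof(RAWINPUTHEADER));
        dx = dy = 0;
        if (raw.header.dwType == RIM_TYPEMOUSE)
        {
            dx = raw.data.mouse.lLastX;
            dy = raw.data.mouse.lLastY;
        }
    }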
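The buffer setup, including the all-important unbinding, goes something like this - a sketch assuming plain 3-float positions and 16-bit indices, which isn't necessarily how my vertex data is actually laid out:

    #include <GL/glew.h>   // or however the buffer entry points get loaded

    // Upload vertex positions and triangle indices once, then unbind both
    // targets so later OpenGL code that doesn't use buffer objects isn't
    // silently reading from them.
    void createTerrainBuffers(const float* positions, int vertexCount,
                              const unsigned short* indices, int indexCount,
                              GLuint& vbo, GLuint& ibo)
    {
        glGenBuffers(1, &vbo);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, vertexCount * 3 * sizeof(float),
                     positions, GL_STATIC_DRAW);

        glGenBuffers(1, &ibo);
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
        glBufferData(GL_ELEMENT_ARRAY_BUFFER, indexCount * sizeof(unsigned short),
                     indices, GL_STATIC_DRAW);

        // The easy-to-forget bit: unbind when done.
        glBindBuffer(GL_ARRAY_BUFFER, 0);
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
    }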
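And the point-to-plane distance is just a scalar projection onto the plane normal. A little sketch, using the same Vec3 struct as the quaternion snippet above:

    #include <cmath>

    // Signed distance from point p to the plane through point p0 with
    // normal n: project (p - p0) onto the normal and divide by its length.
    float planeDistance(const Vec3& n, const Vec3& p0, const Vec3& p)
    {
        float dx = p.x - p0.x, dy = p.y - p0.y, dz = p.z - p0.z;
        float len = std::sqrt(n.x*n.x + n.y*n.y + n.z*n.z);
        return (n.x*dx + n.y*dy + n.z*dz) / len;
    }

    // The bounding-sphere test then falls out: the sphere touches the plane
    // when its centre is within one radius of it.
    bool sphereHitsPlane(const Vec3& n, const Vec3& p0,
                         const Vec3& centre, float radius)
    {
        return std::fabs(planeDistance(n, p0, centre)) <= radius;
    }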
I still don't have much, but all the pieces are gradually coming together now, and it means I can go into more depth on specific things...

At the moment, the thing I'm getting most excited about is the terrain. Initially I thought I'd have a terrain map, wrapping at the edges as Lander did. But say I have a terrain map made up of 1024x1024 tiles, and only have one byte of terrain data per tile - that's a megabyte straight off the bat. For height and colour it's going to be at least 5 bytes per tile, and if I have multiple maps it could build up to quite a lot of data. I'd also like the possibility of large, open spaces where you can really build up some speed and not wrap too quickly, which probably means much bigger maps than that.

Wireframe terrain maps: an 80s sci-fi staple
Big terrain maps mean lots of storage, potentially a large memory footprint to cache it, and a lot of design too, so I'm drawn to the idea of procedural generation. Here terrain is generated algorithmically from a pseudo-random sequence. Rescue on Fractalus! used this idea, but that was a bit too craggy and random. I could have a mix of designed levels dotted over the world, with generated terrain covering the gaps - much like Frontier, where core systems were scientifically accurate but the rest of the galaxy was procedurally generated. This is gradually turning into a homage to David Braben...

But back in the real world, the terrain doesn't warp around existing sites - structures are located in suitable sites in the existing terrain. So I think that's probably the way to go - generate large amounts of terrain randomly with procedural generation, then scout for suitable sites to put the levels and apply some "terraforming". I'm not sure how easy that would be in practice, and if I changed the generation algorithm then everything would have to be re-done. So for now I want to concentrate on the algorithm itself and get that nailed down.

A commonly-used method for generating terrain is the diamond-square algorithm. It's a pretty simple iterative method which is described very well on this page, so I won't repeat the explanation here. To generate pseudorandom numbers I'm using a Linear Congruential Generator, with the same parameters Donald Knuth himself uses for MMIX and an "Xn" formed by combining the x and z co-ordinates.
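For reference, Knuth's MMIX constants and the sort of per-tile generator I mean look something like this. The way I fold x and z into a single seed below is just one plausible scheme for illustration, not necessarily the one I'll keep:

    #include <cstdint>

    // Knuth's MMIX LCG constants; the modulus of 2^64 comes for free from
    // unsigned 64-bit overflow.
    const uint64_t LCG_A = 6364136223846793005ULL;
    const uint64_t LCG_C = 1442695040888963407ULL;

    uint64_t lcgNext(uint64_t x) { return LCG_A * x + LCG_C; }

    // A repeatable pseudo-random value for a given tile: fold x and z into
    // one 64-bit seed, run a couple of LCG steps, and map the top bits to
    // [-1, 1). The folding scheme is purely illustrative.
    float tileNoise(int x, int z, uint64_t worldSeed)
    {
        uint64_t seed = worldSeed
                      ^ (static_cast<uint64_t>(static_cast<uint32_t>(x)) << 32)
                      ^ static_cast<uint64_t>(static_cast<uint32_t>(z));
        seed = lcgNext(lcgNext(seed));
        return static_cast<float>(seed >> 40) / static_cast<float>(1 << 23) - 1.0f;
    }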

A mountain floating in the air kinda ruins the illusion of realism
The results are vaguely realistic-looking. I've applied some stock textures with transitions based on bands of height, and some very crude blending between them - it doesn't look brilliant but it's good enough for now, and it showed up a bug in my depth buffering which I hadn't noticed with wireframes or flat colouring.
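The blending really is crude - per tile it boils down to something like a clamped linear ramp between two height bands (the band edges below are placeholder values, not the ones I'm actually using):

    #include <algorithm>

    // Very crude blend between two height bands: all texture A below 'lo',
    // all texture B above 'hi', and a linear ramp in between.
    float bandBlend(float height, float lo, float hi)
    {
        float t = (height - lo) / (hi - lo);
        return std::min(1.0f, std::max(0.0f, t));
    }

    // e.g. float w = bandBlend(height, 40.0f, 60.0f);  // weight for the upper texture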

The next thing to look at is what to do at the edges. The weedy solution would be to wrap, but because I can use procedural generation to map out an essentially infinite area it'd be better if I generated more terrain. The problem is that I don't really want to have to draw an infinite area every frame, so I need to find some intelligent way of only storing the terrain for the local area and generating more terrain on-the-fly as the camera moves. Easier said than done, and it's going to get worse when I add diffuse lighting and need to calculate vertex normals for every triangle. But an advantage of the diamond-square algorithm is that, because it's iterative, you can easily generate some terrain in the distance at a low level of detail and apply more iterations to increase the detail as it gets closer.
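The "only keep the local area" idea would look roughly like this - a sketch with invented names, keyed on chunk co-ordinates around the camera:

    #include <cmath>
    #include <map>
    #include <utility>

    struct Chunk { /* heights, buffers, etc. */ };

    typedef std::map<std::pair<int, int>, Chunk> ChunkMap;

    // Keep only the chunks within 'radius' chunks of the camera, generating
    // missing ones on demand and throwing the rest away.
    void updateChunks(ChunkMap& chunks, float camX, float camZ,
                      float chunkSize, int radius)
    {
        int ccx = static_cast<int>(std::floor(camX / chunkSize));
        int ccz = static_cast<int>(std::floor(camZ / chunkSize));

        // Drop chunks that have fallen outside the window around the camera.
        for (ChunkMap::iterator it = chunks.begin(); it != chunks.end(); )
        {
            int dx = it->first.first - ccx, dz = it->first.second - ccz;
            if (dx < -radius || dx > radius || dz < -radius || dz > radius)
                chunks.erase(it++);
            else
                ++it;
        }

        // Generate whatever is missing in that window.
        for (int x = ccx - radius; x <= ccx + radius; ++x)
            for (int z = ccz - radius; z <= ccz + radius; ++z)
                if (chunks.find(std::make_pair(x, z)) == chunks.end())
                    chunks[std::make_pair(x, z)] = Chunk();   // run diamond-square here
    }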

Ideally I'd map out an entire planet. That'd be fantastic, but it's going to be tricky. The tiles that make up the terrain will no longer be relative to a horizontal plane, but the curved surface of the planet. The horizon will naturally limit the required draw distance at low altitude, but it'll need to increase at higher altitudes to the point where I can fit the entire planet on screen. This'll probably mean I'll have issues with depth buffer precision, which can lead to z-fighting, so at the very least I'll have to change the clipping planes as I zoom out, but I'll probably have to do multiple passes.
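The clip-plane adjustment is at least easy to sketch: push the far plane out to a bit beyond the horizon distance for the current altitude. The constants here are placeholders, and the real thing will need more care (and probably those multiple passes):

    #include <algorithm>
    #include <cmath>

    // Push the clip planes out as the camera climbs. Near the ground a
    // tight near/far range preserves depth-buffer precision; at altitude
    // the far plane has to reach past the horizon.
    void clipPlanesForAltitude(float altitude, float planetRadius,
                               float& nearPlane, float& farPlane)
    {
        // Let the near plane drift out as the ground drops away.
        nearPlane = std::max(0.5f, altitude * 0.01f);

        // Distance to the horizon of a sphere: sqrt(h * (h + 2R)).
        float horizon = std::sqrt(altitude * (altitude + 2.0f * planetRadius));

        // Some slack so terrain just beyond the geometric horizon still draws.
        farPlane = horizon * 1.5f + 1000.0f;
    }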

Still no physics, still no lighting, still using a placeholder for the UFO, still no sound whatsoever. Then I'm getting crazy ideas for little touches, like using GLSL shaders to model atmospheric refraction. And one day I'll port it back to the Raspberry Pi again. Plenty of stuff to do, so little time.
