Friday, 21 December 2012

Talkin' 'bout my generation

Most of the mini-milestones I set myself in my last entry have gone pretty well:
Collision detection is one thing, collision handling is another
  • I decided against switching matrix libraries. The Boost one doesn't have a constructor that takes initial values, and the constructor is the bit that's most iffy about my own matrix class. Once I'd implemented Gauss-Jordan elimination, pretty bloody well if I may say so myself, I had everything I needed. I also discovered that the uniform matrix functions in OpenGL have a built-in transpose flag, which is useful for overcoming the row-major/column-major issue.
  • Quaternions - not very intuitive, but dead easy once you get the hang of them. A single "orientation" quaternion can be converted directly into the model matrix, or can be multiplied with an arbitrary vector on the model to find where that vector lies in the current orientation (there's a short sketch of this after the list). For example, if vertical thrust is in the Y direction in the model, multiply that by the orientation quaternion and you've got the new direction of thrust.
  • Mouse input seemed simple at first as I used WM_MOUSEMOVE, but the problem with that is that it's bounded by the window or screen. It took me a while to find the right solution: many people seem to advocate moving the cursor back to the centre of the window every frame, but I reckon the best way is to use raw input (also sketched after the list). Once you know how, it's pretty simple and works beautifully.
  • A chase camera, as expected, was very easy once I had the stuff above in place. However it caused a lot of grief as I forgot to give it an object to chase, and I started getting some very weird errors - unhandled exceptions in crtexe.c. Turns out that's Visual Studio's special way of saying "segmentation fault" or "uninitialised pointer". Still, I got to the bottom of it fairly quickly and learned a lot about VS's heap debug features in the process.
  • Vertex buffers were again much easier than I thought. You just have to be careful to unbind the buffer when you're done or it'll confuse any subsequent OpenGL code which doesn't use buffer objects, and careful not to do out-of-bounds memory accesses or it can crash the video driver. I'm also using index buffers; they make my code a lot simpler and take up less memory. All in all I'm now able to have many more triangles on-screen without any creaking at the seams.
  • Collision detection is really quite hard. I'm just doing the most basic test - player collisions with terrain based on the player's "bounding sphere" intersecting with the terrain tile. Once again the coding isn't the problem - it's remembering all of the maths. How do you find the distance between a plane and a point again? Oh yeah... find the plane normal, scalar projection, Bob's your uncle (that one's sketched below too). There's a lot more work to do here - I'll eventually have to do BSP trees I guess - but it's usable for now.
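Since quaternions were the bit I had to think hardest about, here's a rough sketch of the vector-rotation trick mentioned above. It's only an illustration (the struct and function names are mine, not the project's): rotate a model-space vector v by computing q * v * conjugate(q), treating v as a pure quaternion.

    struct Vec3 { float x, y, z; };
    struct Quat { float w, x, y, z; };

    // Hamilton product of two quaternions
    Quat multiply(const Quat &a, const Quat &b)
    {
        Quat r;
        r.w = a.w*b.w - a.x*b.x - a.y*b.y - a.z*b.z;
        r.x = a.w*b.x + a.x*b.w + a.y*b.z - a.z*b.y;
        r.y = a.w*b.y - a.x*b.z + a.y*b.w + a.z*b.x;
        r.z = a.w*b.z + a.x*b.y - a.y*b.x + a.z*b.w;
        return r;
    }

    // Assumes q is unit length, so the conjugate doubles as the inverse.
    Vec3 rotate(const Quat &q, const Vec3 &v)
    {
        Quat p;  p.w = 0.0f;  p.x = v.x;   p.y = v.y;   p.z = v.z;    // vector as a pure quaternion
        Quat qc; qc.w = q.w;  qc.x = -q.x; qc.y = -q.y; qc.z = -q.z;  // conjugate
        Quat r = multiply(multiply(q, p), qc);
        Vec3 out; out.x = r.x; out.y = r.y; out.z = r.z;
        return out;
    }

    // e.g. if thrust is +Y in model space:
    //   Vec3 up;  up.x = 0.0f;  up.y = 1.0f;  up.z = 0.0f;
    //   Vec3 worldThrust = rotate(orientation, up);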
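The raw input mentioned above boils down to two bits of Win32 boilerplate - register for mouse data, then read relative movements from WM_INPUT messages. Something along these lines (a sketch, assuming a normal window procedure with hWnd and lParam in scope):

    // Register once at startup: usage page 0x01 / usage 0x02 is the mouse.
    RAWINPUTDEVICE rid;
    rid.usUsagePage = 0x01;
    rid.usUsage     = 0x02;
    rid.dwFlags     = 0;        // default behaviour: WM_INPUT while in the foreground
    rid.hwndTarget  = hWnd;
    RegisterRawInputDevices(&rid, 1, sizeof(rid));

    // Then in the window procedure:
    case WM_INPUT:
    {
        RAWINPUT raw;
        UINT size = sizeof(raw);
        GetRawInputData((HRAWINPUT)lParam, RID_INPUT, &raw, &size, sizeof(RAWINPUTHEADER));
        if (raw.header.dwType == RIM_TYPEMOUSE)
        {
            LONG dx = raw.data.mouse.lLastX;   // relative movement, not clamped
            LONG dy = raw.data.mouse.lLastY;   // to the window or the screen
            // feed dx/dy into the camera or player controls here
        }
        return 0;
    }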
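And the plane-distance maths from the collision bullet really is only a couple of lines once it's written down. A minimal version, reusing the Vec3 struct from the quaternion sketch (function names made up):

    float dot(const Vec3 &a, const Vec3 &b)
    {
        return a.x*b.x + a.y*b.y + a.z*b.z;
    }

    // Signed distance from point p to the plane through p0 with unit normal n:
    // project (p - p0) onto n. Negative means p is on the far side of the plane.
    float distanceToPlane(const Vec3 &p, const Vec3 &p0, const Vec3 &n)
    {
        Vec3 d;
        d.x = p.x - p0.x;  d.y = p.y - p0.y;  d.z = p.z - p0.z;
        return dot(d, n);
    }

    // Crude terrain test: collide if the distance from the player's position to the
    // tile's plane is less than the bounding-sphere radius.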
I still don't have much, but all the pieces are gradually coming together now, and it means I can go into more depth on specific things...

At the moment, the thing I'm getting most excited about is the terrain. Initially I thought I'd have a terrain map, wrapping at the edges as Lander did. But say I have a terrain map made up of 1024x1024 tiles, and only have one byte of terrain data per tile - that's a megabyte straight off the bat. For height and colour it's going to be at least 5 bytes per tile, and if I have multiple maps it could build up to quite a lot of data. I'd also like the possibility of large, open spaces where you can really build up some speed and not wrap too quickly, which probably means much bigger maps than that.

Wireframe terrain maps: an 80s sci-fi staple
Big terrain maps mean lots of storage, potentially a large memory footprint to cache it, and a lot of design too, so I'm drawn to the idea of procedural generation. Here terrain is generated algorithmically from a pseudo-random sequence. Rescue on Fractalus! used this idea, but that was a bit too craggy and random. I could have a mix of designed levels dotted over the world, with generated terrain covering the gaps - much like Frontier, where core systems were scientifically accurate but the rest of the galaxy was procedurally generated. This is gradually turning into a homage to David Braben...

But back in the real world, the terrain doesn't warp around existing sites - structures are located in suitable sites in the existing terrain. So I think that's probably the way to go - generate large amounts of terrain randomly with procedural generation, then scout for suitable sites to put the levels and apply some "terraforming". I'm not sure how easy that would be in practice, and if I changed the generation algorithm then everything would have to be re-done. So for now I want to concentrate on the algorithm itself and get that nailed down.

A commonly-used method for generating terrain is the diamond-square algorithm. It's a pretty simple iterative method which is described very well on this page, so I won't repeat the explanation here. To generate pseudorandom numbers I'm using a Linear Congruential Generator, with the same parameters Donald Knuth himself uses for MMIX and an "Xn" formed by combining the x and z co-ordinates.
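To give a flavour of it (treat this as a sketch - the exact way the x and z co-ordinates get mixed into the seed isn't final): one or two steps of Knuth's MMIX recurrence, Xn+1 = (6364136223846793005 * Xn + 1442695040888963407) mod 2^64, seeded from the tile co-ordinates so the same tile always gets the same offset.

    #include <stdint.h>

    // Deterministic "random" value for a terrain tile: seed from (x, z), then run
    // the LCG a couple of times to scramble the bits.
    uint64_t terrainNoise(int32_t x, int32_t z, uint64_t worldSeed)
    {
        uint64_t state = worldSeed ^ (((uint64_t)(uint32_t)x << 32) | (uint32_t)z);
        state = state * 6364136223846793005ULL + 1442695040888963407ULL;
        state = state * 6364136223846793005ULL + 1442695040888963407ULL;
        return state;
    }

    // The top bits are the most "random", so a midpoint displacement might use e.g.
    //   float offset = ((terrainNoise(x, z, seed) >> 40) / 16777216.0f - 0.5f) * roughness;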

A mountain floating in the air kinda ruins the illusion of realism
The results are vaguely realistic-looking. I've applied some stock textures with transitions based on bands of height, and some very crude blending between them - it doesn't look brilliant but it's good enough for now, and it showed up a bug in my depth buffering which I hadn't noticed with wireframes or flat colouring.

The next thing to look at is what to do at the edges. The weedy solution would be to wrap, but because I can use procedural generation to map out an essentially infinite area it'd be better if I generated more terrain. The problem is that I don't really want to have to draw an infinite area every frame, so I need to find some intelligent way of only storing the terrain for the local area and generating more terrain on-the-fly as the camera moves. Easier said than done, and it's going to get worse when I add diffuse lighting and need to calculate vertex normals for every triangle. But an advantage of the diamond-square algorithm is that because it's iterative you can easily generate some terrain in the distance at a low level of detail and apply more iterations to increase the detail as it gets closer.

Ideally I'd map out an entire planet. That'd be fantastic, but it's going to be tricky. The tiles that make up the terrain will no longer be relative to a horizontal plane, but the curved surface of the planet. The horizon will naturally limit the required draw distance at low altitude, but it'll need to increase at higher altitudes to the point where I can fit the entire planet on screen. This'll probably mean I'll have issues with depth buffer precision, which can lead to z-fighting, so at the very least I'll have to change the clipping planes as I zoom out, but I'll probably have to do multiple passes.

Still no physics, still no lighting, still using a placeholder for the UFO, still no sound whatsoever. Then I'm getting crazy ideas for little touches, like using GLSL shaders to model atmospheric refraction. And one day I'll port it back to the Raspberry Pi again. Plenty of stuff to do, so little time.

Tuesday, 27 November 2012

Crash! Boom! Bang!

The aforementioned freezes are back and getting a bit ridiculous now. The problem's not limited to OpenGL; it sometimes happens shortly after boot before I've run anything. Fortunately I now occasionally get useful error messages, so I've been able to do better Google/forum searches and apparently this is quite a widespread issue. Setting the USB speed to 1.0 seems to help quite a bit and performance still seems acceptable, but it's making the whole Raspberry Pi experience a bit frustrating at the moment.

I don't see any point in working with the Raspberry Pi in this state, and definitely not on any hardware project where there are likely to be power issues obscured by the USB problem. So it's with a heavy heart that I'm moving my OpenGL coding over to Windows, which is a crying shame. I'll come back to the Raspberry Pi one day, hopefully soon, but for now I'm left feeling that I got mine a bit too early, more so now there's a rev 2 board and more recently it's being shipped with 512MB as standard. Maybe I'll blow mine up with a hardware project and have an excuse to buy a new one?

OpenGL is intended to be cross-platform, and in past projects I've had it up and running on Windows and Linux very quickly. The first problem with OpenGL in Windows is that the maximum version supported out of the box is OpenGL 1.1, which was released way back in January 1997 when the likes of the 3dfx Voodoo, Matrox Mystique and PowerVR Series 1 were all the rage, as indeed was the Rage. v1.1 has been fine for me in the past, but if I want to use the same features that are mandatory for OpenGL ES 2.0 (primarily shaders, introduced to desktop OpenGL in v2.0) then I need something more up to date.

You can't upgrade Windows itself to a newer version of OpenGL as far as I can tell; to get more up-to-date feature support you have to add individual features as extensions. Thankfully this can be handled by the OpenGL Extension Wrangler Library (GLEW). It's a bit of a pain to set up, and when I thought I'd managed it both the static and the dynamic library refused to link no matter what I did, so I ended up importing the GLEW source into my project.
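For anyone else going down this route, the actual usage is small once the linking is sorted - roughly this, after the rendering context has been created (error handling trimmed):

    #include <GL/glew.h>            // must be included before other GL headers

    // ... create the window and the OpenGL rendering context first ...

    glewExperimental = GL_TRUE;     // expose entry points some drivers don't advertise
    if (glewInit() != GLEW_OK)
    {
        // couldn't load the extensions - give up here
    }
    if (!GLEW_VERSION_2_0)
    {
        // driver doesn't offer the OpenGL 2.0 features (i.e. shaders) I need
    }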

And then I think I found a bug in Visual C++. I've got a square matrix class template which takes a value type and a dimension. Its only member variable is an array which contains the elements, and there are member functions to assign values, do multiplication of two matrices, etc. The default constructor does nothing and, as I'm not ready for the brave new world of C++11 yet (given that VC++ has enough trouble getting C++98 right), I assign values with a redefined operator= which copies data out of an array, or another constructor which takes an array. When I created some arrays to do this, and then declared the matrices, I found some really weird stuff going on. If I just did the matrix declarations, no copying, all of the matrices had the same pointer. If I passed the arrays to the matrix constructors, or assigned them with operator=, then each matrix would have the same pointer as one of the arrays, but not the array that was assigned to it. If I made the arrays static (which is perhaps the right thing to do anyway) then everything was fine. What on earth could cause this? Just my own incompetence? The same code worked OK in g++. As soon as I've found a minimal example of this going wrong I'll submit it to MS.
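For the curious, the class is roughly this shape - a stripped-down reconstruction for illustration, not the actual code:

    template <typename T, unsigned int N>
    class SquareMatrix
    {
    public:
        SquareMatrix() {}                              // default constructor does nothing
        explicit SquareMatrix(const T *values) { *this = values; }

        SquareMatrix &operator=(const T *values)       // copy data out of a plain array
        {
            for (unsigned int i = 0; i < N * N; ++i)
                m_elements[i] = values[i];
            return *this;
        }

    private:
        T m_elements[N * N];                           // the only member variable
    };

    // float init[16] = { /* ... */ };
    // SquareMatrix<float, 4> model(init);             // the pattern that went weird in VC++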

After I'd worked around that, and remembered to actually call the function which initialised my OpenGL shaders (took me two days to work that one out), worked out how to use a class method as a custom message handler, tried GDI+, failed to get it working and reverted to OLE (about a month on that, admittedly much of it spent being too frustrated to progress and playing Skyrim instead) I was back to where I'd got to on the Raspberry Pi. I was doing a simple rotation about the X-axis, but when I set up the perspective projection matrix properly I got oscillation in the Y direction in time with the rotation. This didn't happen with orthographic projection, so surely I'd done something wrong with the projection matrix? Turns out it was fine, but GLSL stores matrices in column-major format whereas C arrays are effectively row-major. Transpose the final Modelview-Projection matrix and hey presto... everything working beautifully.
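In other words, with a row-major 4x4 sitting in a plain float[16], either transpose it by hand before uploading or (on desktop GL at least) let the driver do it when the uniform is set. Something like this, with a made-up uniform name:

    GLint loc = glGetUniformLocation(program, "u_mvp");
    glUniformMatrix4fv(loc, 1, GL_TRUE, mvp);    // GL_TRUE = transpose on upload

    // Note: OpenGL ES 2.0 insists the transpose argument is GL_FALSE, so there the
    // matrix has to be transposed (or built column-major) before the call.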

Sorry Chloe, Little Teddy turned out to be an intergalactic
criminal mastermind so we had to send him to the Phantom Zone
I've now moved away from "yayy, it works!" and started structuring things a little better for Offender (still need a better name). Rather than continuously drawing a load of triangles, I've got object classes with drawing and moving methods, and separate drawing routines for terrain. Now that I can build up a list of objects, it's actually starting to look like the beginnings of a game. However, now I'm putting in more stuff I've found that it goes belly-up and dies at around 17,000 triangles. At 60Hz that's about a million a second, which seems a bit low. Admittedly there's still a lot of room for improvement - I'm not using vertex buffers for example - but sorting that out is secondary as I don't need huge numbers of triangles on-screen (yet). All I really need is a single object and some terrain for context, hence the rather psychedelic effort shown here.

In spite of being a lot more complex under the hood, on the surface it's still a bit "Hello Triangle!". Next steps:
  • Maybe use someone else's matrix library, for all the usual reasons people use standard libraries. Why go to the trouble of implementing a matrix inverse when someone's got a tried-and-tested implementation already? The ever-dependable Boost has a matrix library, but I don't think it's quite what I want.
  • Do object positions by coordinate and rotations by quaternion, rather than matrix, so it's easier to move things around. I've already got much of the code for this in my OpenGL screensaver.
  • Add mouse input and player control. Easy for Windows, I'll leave Linux to another day.
  • Add a chase camera to follow the player object. Should be dead easy once I've done all of the above.
  • Add collision detection. Though it's not hard to knock together a crude algorithm, it's difficult to do collision detection accurately and not slaughter your CPU in the process. I've had loads of ideas about this, found a guide on the subject, and it looks like I was definitely thinking along the right lines. I'll start with something pretty crude though - if I could just make the terrain solid so I can't fly through it, that'd be a start.
  • Switch to using vertex buffer objects, maybe use index buffers too, as I'm going to need more triangles sooner or later. I'll probably want texture buffers too.
Once I've done all that, I think I'll have all of the major boilerplate in place and can actually start building up the interesting stuff.

And finally, you may have noticed that I'm a bit of a David Braben fanboy, so please support Elite: Dangerous. 'Tis a worthy project.

Tuesday, 25 September 2012

Interplanetary, quite extraordinary craft

It's been a bit of a disjointed week for my geekery, lots of little bits and pieces.

Screenshots - Having had no luck with existing apps, I asked on the Raspberry Pi forums and the only suggestion I got was to use glReadPixels. This requires screenshot dumping code to be written into the app generating the framebuffer, which is perfectly doable and the code should just be boilerplate. With libjpeg to compress the raw pixels, it works a treat. I'm wondering if it's worth writing a standalone capture app, assuming that would actually work, or if a portable function is adequate, perhaps even better.
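The readback half is only a few lines - something like this (the libjpeg compression is routine but long, so I've left it out; GL's origin is bottom-left, so the rows want flipping before they go to the encoder):

    #include <vector>
    #include <GLES2/gl2.h>

    void grabFrame(int width, int height, std::vector<unsigned char> &pixels)
    {
        pixels.resize(width * height * 4);      // RGBA, 4 bytes per pixel
        glPixelStorei(GL_PACK_ALIGNMENT, 1);    // tightly packed rows
        // GL_RGBA + GL_UNSIGNED_BYTE is the combination ES 2.0 guarantees to support;
        // drop the alpha channel before handing the data to libjpeg.
        glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, &pixels[0]);
    }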

Freezes - Since I started doing more and more complex stuff with OpenGL ES 2.0 I've been getting regular freezes. These were so bad that everything locked up, the network died, no debug was dumped anywhere that I could see, even the keyboard died so the Magic SysRq keys were of no use. I was just about to start the old comment-things-out-one-at-a-time trick, when a Raspbian update was released and sort of fixed it. It now seems to run indefinitely without freezing, but sometimes the USB hub spontaneously dies even though the graphics still keep going. I've plugged my keyboard directly into the RPi now, and it seems to be OK.

3D modelling - While manually-constructed vertices are fine for hello triangle, they're not really feasible for bigger things. So I've downloaded Blender and started learning how to use it. It's not hard, there's just a lot to learn. Thankfully there are some excellent tutorials to get started with. The biggest problems I'm having are my lack of artistic ability, and trying to avoid making my alien craft look like anything from any movie or game I've seen. At the moment it looks like it came right out of Elite. I'll get better, hopefully.

Flight physics - For my anti-Defender (working title: "Offender", better suggestions welcome) the centrepiece is going to be the alien craft. When I think of the archetypal UFO, I think flying saucer - something which doesn't look terribly aerodynamic and just hovers in the air, better suited to interstellar travel than air-to-air combat. The kind of craft I'm picturing is based on that, but has been adapted to fly at speed in the earth's atmosphere. I want something which flies like nothing on earth, but obeys the same laws of physics that earthly craft are bound to and depend upon. I'm going to have to work out the physics with little-to-no knowledge of aeronautics. Here goes then...

Whereas a fixed-wing aircraft uses its wings to generate lift, the craft I picture will have some kind of anti-grav thing propelling it upwards. How would that behave differently to wings? It'd make lift more or less constant, not dependent on velocity or angle of attack, and there'd be no ceiling. The thrust would have to be manually varied with the angle of climb or descent or there'd be a kind of lift-induced drag - in a vertical climb it'd fall backwards. Hinged ailerons or a rudder wouldn't be practical so the anti-grav would need to vary to generate pitch and roll. If there were multiple upwards anti-grav thrusters, then increasing thrust on one side while decreasing on the other should accomplish this and maintain stability. Yaw would require horizontal thrust, and maintaining the ability to roll in a vertical climb would require downward thrust.
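As a first stab at the numbers, the pitch/roll part is just a force difference times a lever arm. A toy sketch - two thrusters only, and everything about the layout and names is provisional:

    // Two anti-grav thrusters, one each side of the centre of gravity.
    // Equal thrust hovers; a difference between the sides rolls the craft.
    void antiGravForces(float leftThrust, float rightThrust, float halfSpan,
                        float &netUpForce, float &rollTorque)
    {
        netUpForce = leftThrust + rightThrust;               // balances weight in a hover
        rollTorque = (rightThrust - leftThrust) * halfSpan;  // torque = force * lever arm
    }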

With a half-decent physics model I think I can get that to work, and also have human aeroplanes, helicopters and missiles behaving with a moderate degree of realism. The trick is going to be getting the level of complexity right so it's accurate enough but isn't computationally intractable, especially if I want this to run on a Raspberry Pi. I'm hoping that by modelling a few simple laws of physics, higher level effects will just drop out - for example, modelling angle of attack correctly should result in stalls. After thinking through the basics it's clear that the most difficult and important bit is going to be aerodynamics and drag.

When an aircraft stalls and tailspins, why does it turn into a nosedive? It can't be its weight distribution as net weight acts through the centre of gravity and imparts no torque. It's got to be drag. When an aircraft rolls, why does the roll not accelerate? It's got to be some kind of angular drag. Also, if my craft is going to be capable of entering the atmosphere from space, drag would determine how much heat gets generated. As I understand it, for an accurate drag model you need to be able to assess the drag coefficient for every possible angle of attack. I basically need a virtual wind tunnel. That's going to be fun... there's got to be a way to simplify it.

I'm kind of looking forward to trying this out, I don't think it would take a huge amount of effort to get to the stage where I have a craft (even if the model is just a placeholder), some control and some basic physics to try out. I'd need to provide some terrain for context if nothing else. No proper collision detection for a while - that's a whole new can of worms - but I should be able to add something which causes a splat or bounce at zero altitude. So far I'm still plucking bits of boilerplate from the stuff I've done so far, making it as generic and reusable as possible, and building up some library functions. Might as well do it properly from the start, eh?

And finally some linkage, as I thought this was pretty cool: Kindleberry Pi

Sunday, 16 September 2012

They have a fight, triangle wins

I'm just a dabbler in OpenGL really. Come to think of it, all I've really done is the most basic geometry and transforms, I've not even done texture mapping (though I've done that in DirectX). As I've had an interest in 3D graphics since college, it's something I want to get more practice at.

The Raspberry Pi supports OpenGL ES 2.0, which is also used in some of the newer mobile phones and tablets. The main difference between ES 2.0 and the OpenGL I'm used to is that it's built around a programmable pipeline, which in practice means that a lot of the core functionality I've taken for granted has been removed.

OpenGL uses little programs called shaders, which configure the graphics hardware or software to apply various transforms or effects. They're written in the imaginatively named OpenGL Shading Language (GLSL) - put the code into a string and pass it to OpenGL, and it'll get compiled at runtime and applied to the appropriate data.

I've never done anything complicated enough with OpenGL to warrant writing a shader - simple stuff is taken care of by the core functions - but in ES 2.0 even the simplest of tasks requires a shader. For example, to project the 3D image onto a screen - something anyone doing 3D work is going to want to do - I'd normally set up the projection matrix. But that's gone in OpenGL ES 2.0: you have to put together the matrix yourself and manually apply it to each vertex with a vertex shader. Apparently this isn't unique to OpenGL ES - "desktop" OpenGL 3.1 has got rid of the projection matrix too - so this is something I'm going to have to get used to.
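To make that concrete, the sort of vertex shader ES 2.0 demands even for basic projection looks roughly like this (the attribute and uniform names are my own), compiled from a string at runtime:

    const char *vertexShaderSource =
        "uniform mat4 u_mvpMatrix;                   \n"
        "attribute vec4 a_position;                  \n"
        "void main()                                 \n"
        "{                                           \n"
        "    gl_Position = u_mvpMatrix * a_position; \n"
        "}                                           \n";

    GLuint shader = glCreateShader(GL_VERTEX_SHADER);
    glShaderSource(shader, 1, &vertexShaderSource, NULL);
    glCompileShader(shader);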

There are good reasons for this - it makes the API simpler and more flexible for advanced users - but it does make it harder for the beginner who has to do a lot of work to get the simplest thing up and running. It also means that the code isn't backwards compatible with OpenGL ES 1.1 and OpenGL 3.0, which is a pain as I like to move my code around onto different platforms.

I've not found a really good guide or tutorial for OpenGL ES 2.0 in C++ (perhaps I should write one?), but by taking snippets of code from various webpages I've managed to cobble together a "Hello Triangle!" It's quite epic at over 300 lines, but there's so much to set up I'd struggle to make it shorter. In the middle of doing this my HDMI->DVI adaptor finally turned up, so I've been able to plug my Raspberry Pi into a monitor and get my red triangle in glorious 1280x1024, instead of the crappy interlaced PAL which as we all know is 720x576.

After that initial hump, getting something a little more complicated working was relatively easy. The tutorial code was all in C, so I made it a bit more C++-like with some juicy classes, call-by-reference, iostream instead of stdio, and getting rid of explicit mallocs where possible. "Hello Triangle!" was using the native display (i.e. no windows), so I added the option to use XWindows instead where it's available. Turns out there's a problem with the Raspberry Pi implementation of X which prevents this from working, so I've abandoned that for now. Then I learned how to do rotations with the vertex shader - which was fairly easy once you have the right matrix and can remember how to do matrix multiplication - and texture mapping with the fragment shader - which is far more complicated than I expected.

The end result was something which mapped a picture to a diamond shape and flipped it over and over, which I'm calling Phantom Zone. I've not worked out how to do screenshots without XWindows, so no pretty pictures this time. It's not much, but it's been useful for picking up the basics. Unfortunately there's a bug where it crashes the Raspberry Pi so badly that all remote connections are killed. I've no idea how I'd even begin to debug that one.

Now I've got the basics working, I'm coming up with ideas for something a bit bigger. When the Acorn Archimedes first came out it was bundled with a demo called Lander. Written by the legend David Braben, it later became the full game Zarch and was ported to other platforms as Virus. I think something along those lines would be fairly simple to do with the benefit of hardware accelerated graphics, at the very least I could get the terrain and craft working.

If that goes well I thought about turning it into a kind of reverse Defender, where you pilot a lone flying saucer and have to abduct people while avoiding increasingly aggravated human defences. That's the kind of idea I can pick up and run with all day, indeed I've already lost a few hours of sleep thinking through the physics alone... but I'm not going to reveal any of the ideas here yet, I'll see how many of them I can put into practice.

Six entries in and I've written a lot about what I'm going to do, and time spent learning the basics, but I haven't actually achieved much. I'm kind of enjoying playing with software for the time being, though a part of me is itching to do the robot and knows that once I've drawn out some schematics I can start buying parts.

Monday, 27 August 2012

Hello, is it me you're looking for?

Elegant simplicity. Those were the days *sniff*.
(Atari 800XL emulated with Atari 800 Win PLus)
It never ceases to amaze me how satisfying "Hello world!" can be.

It seems so trivial now, but way back in the proverbial day the shortest BASIC program felt like a triumph.

Nearly 30 years on, the language may have changed but it's still satisfying proof that you can get something to work. In this instance it took seconds to write some C++, and a couple of hours spread over half a week to work out how to cross-compile from Ubuntu to Raspbian. The relief at finally seeing those immortal words was immeasurable. Could be worse I suppose - it took researchers two years to write the first non-trivial program in Malbolge.

Virtual machine, Makefile, C++, compile, SFTP, SSH session,
and it's done. This is progress, people! Or horses for
courses, you decide.
To cut a long story short, the ARM cross-compile toolchain in the Ubuntu repos is set up for ARMv7 onwards whereas the Raspberry Pi has an ARMv6. With just a few parameters in the makefile you can tell it to compile for ARMv6 with hard float, but you also need to get hold of some ARMv6 static libraries - I just copied them over from the RPi. There's a toolchain on the Raspberry Pi repository on github which already has the correct parameters and libraries, but it's 32-bit so I had to install some 32-bit libraries to get it to run on my 64-bit Ubuntu virtual machine (libc6-i386 and lib32z1 if you're asking). It worked on 32-bit Ubuntu with no fiddling at all, but it took me a week's worth of evenings to figure that out.
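For the record, the makefile changes amount to something like this (the paths are illustrative; the libraries are the ones copied over from the Pi):

    # ARMv6 with hard float, rather than the toolchain's ARMv7 default
    CXX      = arm-linux-gnueabihf-g++
    CXXFLAGS = -march=armv6 -mfpu=vfp -mfloat-abi=hard
    # point the linker at the static libraries copied over from the RPi
    LDFLAGS  = -L$(HOME)/rpi/libs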

This is all well and good when I'm only using the most basic libraries which come with the toolchain, but to do more interesting stuff I'll be needing more libraries. I still don't think I've found the ideal solution to this: at the moment I'm rsyncing the Raspbian libraries over from the Raspberry Pi. Compiling libraries from source seems a bit of a wasted effort. I wonder if I can set up APT to get the Raspbian libraries directly from the repo?

I've managed to get the core part of my OpenGL screensaver working on the Raspberry Pi now. It's incredibly slow, and I think that's because I'm using the OpenGL libraries which aren't supported by the GPU, so it's using the Mesa software drivers instead. To use the GPU I'm going to need to switch to OpenGL ES and EGL, and that's a whole new can of worms but it's ultimately what I wanted to do anyway.

So it's been a bit of a frustrating fortnight, I've broken my update-a-week guideline, and spent too much time floundering around trying to understand the infrastructure with too little tangible progress. Having said that, floundering around for a while is fine so long as something is learnt in the process, and I think I've learnt a lot more about the GNU C++ toolchain, in particular the linker. Hopefully I'll start getting my head round OpenGL ES soon, I think it's mostly the same as regular OpenGL and it's just a matter of appreciating the differences.

Wednesday, 15 August 2012

We are dancing mechanic

Just as I've started thinking about making a board for a Raspberry Pi based robot, someone goes and makes the mother of all robotics boards - the Gertboard. Curse you Gert Van Loo! Actually it's a very nice piece of kit, but it's bigger and more expensive than the Raspberry Pi itself, and I don't think it's really what I want. It's given me a few ideas though.

Some circuitboards, yesterday
The most powerful component on the Gertboard is the Atmel AVR microcontroller, which is the same chip that's at the heart of the Arduino, but it seems like overkill for my purposes. Do I really need a microcontroller on the robot when I've got a Raspberry Pi sitting next to it? It's cheap, and looks like it's got on-chip D/A converters so I could probably use it to drive the motors with variable speeds, but if I wanted that I could do it more simply and cheaply with my own circuit.

The Gertboard manual suggests that you can vary speed with pulse-width modulation (PWM), but the problem is that the Raspberry Pi only has one part-time PWM output and the switching on the GPIOs isn't fast enough for me to do botched up PWM on those. Maybe one PWM output is enough though - rather than connecting four GPIOs to the motor controller inputs, use the GPIOs to gate the PWM input - yeah, that'd work.

I'd been looking at a separate motor controller board, but wouldn't it be neater and cheaper to put a controller chip on my own board? The Gertboard has an L6203, but that's more expensive than the controller board, is specced way beyond what I need, and only has one channel whereas I'd like two. I'll have to look around and see if I can find the same IC that's on the controller board, or something similar. Would PWM work with the motor controller? Gert reckons it will with his board, and who am I to argue?

Wearing a bra on your head while working on geek
projects turns them into Kelly LeBrock. That guy
should know, he's Iron Man FFS.
Now I'm trying to remember the circuits to do some of these things, but I've forgotten pretty much all of the analogue electronics I learnt at college. Thankfully the intaweb is helping to jog my memory a bit. If I do want variable speed, gating a PWM signal should just be AND gates so that's easy. But I'll probably want something to convert that to a stable DC analogue voltage, which is... an integrator? Can the motors take a PWM input? The GPIOs are 3.3V and the motors take 6V, so I need to step that up and be able to supply up to 2A for each motor. An OP-AMP with positive feedback isn't going to supply enough current so I'll need a hefty power transistor in there... whaddya know, I just re-invented the Low drop-out regulator... Is there an off-the-shelf LDO which lets you provide a variable reference voltage? That's basically what's in a motor controller, so I've come full circle - might as well just get one of those.

As I'm so out of practice with boards I'm a bit apprehensive about doing a complicated board in one go. It's tempting to do a smaller board first, or at least do a larger board in phases and test it between each phase. Just getting battery power to the RPi via a switching regulator would be a good start and confidence boost. At the very least I've got to start getting some schematics drawn up, not worry about exact capacitor values or board layout yet, but just start turning the morass of nebulous ideas into something a bit more coherent.

Whatever I do with the robot, it's going to need some software so that's where I'm spending my time at the moment. I'm trying my hand at cross-compiling as it's going to be faster than compiling on the RPi itself, and it's also harder than a native compile so more interesting. As I've been playing with DOSBox recently I downloaded the source for that, got the Raspbian toolchain and... wait a minute, where's the Makefile? There's a Makefile.am and a Makefile.in, what're they? Turns out they're for the GNU Autotools, which I've never used before.

Learning how to use those and how to cross-compile was a bit much to take on in one go so I took a step back and started with "Hello world". Once I'd mustered the presence of mind to write 5 lines of code which actually compiled natively, I'd got the right toolchain for compiling C++ from Ubuntu to the Raspberry Pi (g++-arm-linux-gnueabihf seems the best bet), and I'd sorted out the libraries, I got an executable which I FTP-ed over to the RPi, SSH-ed in, ran the executable, aaaand.... segmentation fault, my old friend. Next time my "Hello World" is just going to print "Segmentation Fault" so it'll look like it worked whether it did or not. It's barely worth opening up gdb; with a program that simple it's going to be a linking issue, so it's probably just a matter of getting the right options in the makefile.

In other news, I've continued getting things to work on DOSBox. Speedball 2 works very nicely, it sounds absolutely awful but early PC games always did. I got UFO: Enemy Unknown working briefly but it was very slow and after I'd faffed about with the config it stopped working. It looks like I should give up on protected mode games until protected mode is improved, which it probably never will be, or DOSBox starts using SDL 2.0 with its improved use of OpenGL. And it seems that others are having much more luck with MAME than I did on my half-arsed attempt, so maybe I'll give it another go one day.

Monday, 6 August 2012

We've only just begun

I finally got a chance to play with the Raspberry Pi and it's been a doddle to set up so far.

After reading some of the horror stories, I half expected some problems getting the RPi to start up at all but thankfully had none. I guess my paranoia helped as I'd read all about the power issues, purchased a half-decent power supply and a really good powered USB hub.

My only video option at the moment is composite to my telly. Yes, I know most self-respecting gadget-philes bought HD flatscreens with HDMI years ago - my partner and I both bought 21" Sony Trinitrons over a decade ago, we don't want to get a new telly until at least one of them expires, and the damn things are just too well-built, refusing to die. I'm using a composite + phono to SCART adaptor, combined with composite and 3.5mm to phono cables running from the RPi, and the picture isn't too bad. There's a bit chopped off the left-hand side, the interlacing looks awful sometimes and text in a terminal window is a bit blurry, but it's generally good enough for most stuff and it'll do until my HDMI-DVI adaptor turns up.

Bear in mind that though I've had close to 20 years of experience as a UNIX/Linux user, I've not really done much in the way of installs or admin beyond switching window manager. I've installed Ubuntu on my netbook about a million times, but that practically installs itself. I've been learning as I've gone along, trawling the intaweb for guides, supplementing them with a great deal of educated guesswork and trial-and-error.

For the Raspberry Pi I'm using Raspbian, as it's now the recommended Linux distribution and the optimisations are coming thick and fast. At initial startup there's a config utility to help sort out the important stuff. I'm not going to be doing anything clever with graphics yet, so memory's shared 7:1 between CPU and GPU. I thought I'd give the LXDE window manager a go to start off with, but at PAL resolution it looked a bit cluttered and with the interlacing it was a little too reminiscent of Amiga Workbench for my liking, so I turned it off again and turned on SSH.

SSH just worked, with no fuss whatsoever. I got the IP address from my router (though "ifconfig" on the RPi also does the job), put it into PuTTY and that was it - remote access to the RPi. Xming was also fairly easy to set up without really knowing what I was doing - initially the RPi's IP address was being blocked so I added it to Xming's X?.hosts file, set the $DISPLAY variable, et voilà. Then I undid all that and set up X11 forwarding in PuTTY to sort out the display for me. Wireless was a lot more tricky to set up, but once I'd read a few guides (the Debian one was the most useful), convinced myself that the dongle wasn't broken, figured out that you need to be root to scan for networks, learnt the syntax of the interfaces file, realised that WPA needs a completely different set of commands to WEP, and worked out how to generate a PSK, I was up and running.
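For anyone fighting the same battle, the relevant stanza of /etc/network/interfaces ends up along these lines (the SSID and key are placeholders; the hex PSK comes out of wpa_passphrase):

    auto wlan0
    allow-hotplug wlan0
    iface wlan0 inet dhcp
        wpa-ssid "MyNetwork"
        wpa-psk  <hex key generated by wpa_passphrase>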

Setting those things up meant I could remove the mouse and keyboard, disconnect the telly, and unplug the Ethernet cable tying it to my router. I've now got the RPi on my desk instead of a footstool by the telly, with just the hub/wireless plugged in. It's a useful setup for playing around, getting better stuff set up, taking screenshots, perhaps natively building C++, but what to try first? For low-effort instant gratification, how about some old games?

Tiny amount of activity in game, sshd CPU usage spikes
I mentioned Beneath a Steel Sky in my previous post, and that's free on ScummVM so I gave it a go... perhaps unsurprisingly it was very, very slow over a remote display and there was no audio. When I switched back to using the telly it was nigh perfect and really fast, huzzah! There was the odd crackle and pop over the speakers, though I've no idea whether that was the SCART connector, phono cable, socket, board, drivers, feedback from the USB hub or just the incessant ALSA underruns you get with a busy CPU. Back on the SSH connection I could see the CPU usage of the sshd (SSH daemon) process spike when anything moved. Turning off X11 forwarding and going back to insecure X?.hosts/$DISPLAY didn't make much difference, but VNC was a bit better.

It took over an hour just to get to kick-off for this
screenshot
Next up, Sensible World of Soccer running on DOSBox. First job: finding the CD, which took a while as it was in storage, in one of the many boxes of stuff we rarely use in the spare room. You may be shocked to learn that the Raspberry Pi doesn't have a native CD drive, and I don't have one with a USB connection, so I tried to find an ISO maker which was a) free and b) actually worked. Eventually I settled on ISODisk, which was a bit flakey but did the job nicely. Once SWOS was running it was ridiculously slow with X forwarding, and didn't work at all with VNC. On the telly there was a keyboard issue, but once I'd sorted that out it was still rather slow. It could probably be tweaked a bit, but SWOS runs in protected mode which is notoriously slow on DOSBox so I think the RPi's 700MHz processor doesn't quite cut it. That's a shame, I was quite looking forward to having SWOS on a little box under my telly - though I've discovered that there's a Wii version of DOSBox available...

I've also had a look at MAME, but there's some weird video driver issue I'll have to get to the bottom of. If I do get it working, I'll really want to get my gamepad working too but that looks a bit tricky under Linux. It's a Microsoft gamepad, so what are the chances of an official Linux driver?

Perhaps trying to use the Raspberry Pi as an emulation machine is a bit optimistic as its processor isn't great, and since most emulators don't take advantage of the GPU, all the graphics work goes through the CPU. ScummVM works well but as I understand it that's not really an emulator, it's a new implementation of the interpreter. But it's been a useful exercise in just getting stuff working and probing the limitations of the box. For my next trick I think I'll try something a little more suitable. Maybe I'll learn how to use OpenGL ES? Well lookie here, the Quake II source code...