Does anyone remember when Apple brought out the first Power Macintosh, early in 1994? I do, because I was the Mac specialist in a university computing school at the time, and the advent of the Power Mac was hailed by Apple as the new dawn of desktop computing.

On the face of it, the Power Mac was fabulous: the only problem was that it was a completely new architecture, which meant existing software written for the Motorola 680x0 range of processors simply wouldn't run on it.

Apple's idea at the time was to build a 680x0 emulator so that we could run the old Mac software on the new PowerPC processor. A fine idea in principle – you take a powerful new PowerPC processor, run a program on top of it that presents what looks like a 680x0 to everything higher up, and then run the old programs on top of this new layer. In practice, though, it was bloody awful – the emulated software crawled.
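To see why that layer costs so much, it helps to picture what a processor emulator actually does: it sits in a loop, fetching each guest instruction, working out what it means, and then doing the equivalent work on the host CPU. The little C sketch below is purely illustrative – the opcodes, register layout and structure names are invented for the example, and bear no relation to Apple's real emulator – but it shows the basic fetch-decode-execute shape of the job.

/* Illustrative sketch of a processor emulator: a fetch-decode-execute
 * loop interpreting made-up "680x0-style" instructions on the host.
 * Not Apple's emulator; opcodes and registers are simplified inventions. */
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint32_t d[8];      /* data registers D0-D7           */
    uint32_t pc;        /* program counter                */
    uint8_t  mem[256];  /* a tiny slice of "guest" memory */
} Cpu68k;

enum { OP_NOP = 0x00, OP_ADDI = 0x01, OP_HALT = 0xFF };  /* invented opcodes */

static void run(Cpu68k *cpu)
{
    for (;;) {
        uint8_t op = cpu->mem[cpu->pc++];      /* fetch  */
        switch (op) {                          /* decode */
        case OP_NOP:
            break;
        case OP_ADDI:                          /* execute: D0 += next byte */
            cpu->d[0] += cpu->mem[cpu->pc++];
            break;
        case OP_HALT:
            return;
        default:
            fprintf(stderr, "unknown opcode %02x\n", op);
            return;
        }
    }
}

int main(void)
{
    Cpu68k cpu = { .pc = 0, .mem = { OP_ADDI, 5, OP_ADDI, 7, OP_HALT } };
    run(&cpu);
    printf("D0 = %u\n", (unsigned)cpu.d[0]);   /* prints D0 = 12 */
    return 0;
}

Even this toy loop does several host operations for every guest instruction; a real 680x0 emulator has to model addressing modes, condition codes and exceptions on top of that, which is roughly where the 1994 performance went.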

Now come right up to date and look at the new version of the Mac operating system, MacOS X. Although it still runs on a PowerPC processor, there's a different problem – the operating system kernel itself has changed, which means that software writers have a whole different set of system calls to make. So again, an application won't run unless you either (a) rewrite it, or (b) plonk an emulator on top of MacOS X to make it appear to the application as if it's an older version. To ensure that users don't have to rush out and spend thousands on the MacOS X versions of their software, Apple have built another emulator. And it works just great – you'd never know it wasn't really MacOS 9 running under there.

What changed?
So what's the difference between these two scenarios? First off, processor speeds have scaled enormously: a program that needed a certain percentage of a processor's time in 1994 would need only a small fraction of that today. So if it were possible (and if someone could be bothered) to run the old 680x0 emulator from the first Power Macs on a modern G4, they'd wonder what all the fuss was about – it would fly like the wind on modern hardware.

Second is the fact that the transition to MacOS X and its new Mach-based Unix kernel is merely a change in the underlying software rather than a change of hardware platform. In the average case, software that emulates one type of hardware on top of another, different type will be far more complex than software that maps one set of system calls onto another within the same hardware platform. This is particularly true when one considers that the 680x0 family (a CISC processor) and the PowerPC (a RISC processor), despite having part of their heritage in common thanks to Motorola's involvement in both, are very different beasts, and the task of mapping the functions of one onto the other is far from simple.
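Mapping system calls, by contrast, is more like translation at a counter than impersonating a whole machine. The fragment below is a hand-waving sketch of the idea – OldSys_OpenFile is an invented legacy call, not a real MacOS API – showing how a compatibility layer can catch an old-style request and hand it straight to the host's own system call (POSIX open(), in this example):

/* Sketch of a system-call compatibility shim: the legacy application
 * calls what it thinks is the old API, and the shim translates the
 * arguments and forwards the work to the modern system underneath.
 * OldSys_OpenFile is invented for illustration only. */
#include <fcntl.h>

int OldSys_OpenFile(const char *name, int writable)
{
    /* Translate the legacy convention into the host's flags,
       then let the real, modern system call do the work. */
    int flags = writable ? O_RDWR : O_RDONLY;
    return open(name, flags);
}

No instruction-by-instruction interpretation is needed: the application's code still runs natively, and only its requests to the operating system get reworded on the way through.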

The third aspect – a lesser one perhaps, but by no means a negligible one – is that programming techniques and systems have developed significantly in the ten years or so since the Power Mac arrived. Code optimisers have improved dramatically, as have algorithm design techniques, which means that if we were to apply modern programming systems to 1994 technology, the resulting programs would run noticeably faster on the old hardware than the software of the time did.

Modern emulation
Now that the technology and the techniques have come of age, emulation is seeing more and more popularity in real-world applications. Unsurprisingly, given the observations we made earlier regarding the complexities of emulating one hardware platform on another, we don't often come across serious attempts to produce "virtual machines" that try to (say) make 80486 code work on an IBM mainframe. Such emulators do come along from time to time, but they're more in the spirit of, for example, running a Commodore 64 or ZX Spectrum emulator on top of a Windows XP machine (remember – these were based on very simple, very slow processors, and so performance isn't an issue).

What we do see a lot of, though, is operating system emulators. There is, after all, a real business need to be able to, say, run a Linux window on top of MacOS, or to run a Windows window on a Linux-based desktop PC or terminal server. And with today's technology, these things work, and work extremely well (often surprisingly fast).

Emulation will, however, always be slower than the real thing, and so to use an emulator you need a good reason. After all, you can run Windows 98 faster by booting a PC directly into Windows 98 than by running it inside a window under (say) Red Hat Linux 9, as we do on one of our lab machines; the benefit of emulation, though, is that you can run both at once instead of constantly rebooting the computer.

In short
It seems, then, that emulation is now a feasible concept, and that it will thrive – though in the mainstream only as software emulating software rather than hardware. Because you're adding a layer of software (and thus some more code to steal processor cycles) between the OS and the program, an emulated world will inevitably be slower than a native one, but with today's technology this slowdown is often minimal and is more than offset by the benefit of not having to switch constantly between native OSs.