In the early days of computers, the operating system -- the software that manages applications and system resources -- didn't exist. That's because software itself didn't exist. Computers were programmed by manually wiring calculating circuits together, then physically changing the wiring for every new calculation to be performed.

That began to change in 1945, when John von Neumann described a computer that could store in memory the instructions that told its calculating units what to do. On June 21, 1948, the first computer program ran on an experimental machine called Baby at the UK's University of Manchester -- and software was born.

But computers still ran only one program at a time. Operators had to load each program and its data into memory from tape or punched cards, run the program and then repeat the whole process for the next program. It was faster than changing the wiring, but it still wasted plenty of very expensive computer time.

By the early 1950s, businesses using computers were looking for ways to solve that problem. In 1955, programmers at the General Motors Research Center came up with a solution for their IBM 701 computer: a batch-processing monitor program that let operators put a series of jobs on a single input tape. It was the first step toward a full-scale operating system.

Computer vendors soon offered their own batch monitors. In the early 1960s, they began to add what would become critical operating system features. The Master Control Program for the Burroughs B5000 offered virtual memory and the ability to run several processes at once. Univac's EXEC I allocated memory, scheduled CPU time and handled I/O requests. IBM's OS/360 allowed the same software to run on a variety of machines.

In 1963, a team at MIT led by Fernando Corbato developed the Compatible Time Sharing System (CTSS), the first practical OS that let several users at once run programs from terminals. Much of that team soon went to work on a far more ambitious OS: Multics, a joint project with General Electric Co. and AT&T Bell Laboratories that would offer a tree-structured file system, a layered structure and many other modern OS features.

AT&T pulled out of the Multics project in 1969. But AT&T programmers Ken Thompson and Dennis Ritchie began to develop their own scaled-down version of Multics, which they punningly called Unix. Unix was easy to port to new computer architectures, and it grew popular at universities because AT&T made its source code available for students to study. By the 1980s, Unix had spawned a generation of workstations -- and displaced many existing operating systems.

Meanwhile, the first desktop computers arrived in the mid-1970s with OSs that were little more than the monitors of 20 years earlier. When IBM began selling PCs in 1981, it offered several OSs -- but the least expensive and most popular was PC DOS, provided by a small company named Microsoft Corp.

Microsoft soon dominated PC operating systems, steadily borrowing features from its competitors, such as the Windows graphical user interface cribbed from Apple Computer's Macintosh. Microsoft also offered Xenix, the most popular PC version of Unix, and worked with IBM to develop a multi-tasking PC system, OS/2, in 1987. But three years later, the IBM/Microsoft partnership fell apart, and Microsoft merged its OS/2 work with its popular Windows to create Windows NT in 1993.

Finnish student Linus Torvalds wasn't trying to compete with Microsoft in 1991 when he began work on a Unix clone he called Linux. After finishing a first version, Torvalds asked for help from other programmers on the Internet. By 1994, Linux was a full-scale, free operating system. By 1999, it ran on more Internet Web servers than Microsoft OSs -- and was Microsoft's most significant competition.

Today, Linux runs on everything from handhelds to mainframes, while versions of Windows span nearly the same range. How competition -- and interoperability -- between Windows and Linux develops may shape the future of enterprise computing.