A few years back, when 1 gigabyte was a big disk, Unix system administration was a constant juggling act, trying to stay one step ahead of the users’ seemingly insatiable need for more disk space. Now, of course, with 120GB disks commonplace on desktop workstations and 600GB disks appearing on servers, such worries are a thing of the past, right?
Wrong! If there’s one thing we’ve learned over the years, it’s that you can never have too much disk space; indeed, it often seems you can never have enough. You may have terabytes of capacity on your servers, but demand has more than kept pace with supply where disk space is concerned, and managing disk space usage efficiently has never been more critical.
So what’s the best way to go about it? There are in essence three components to a complete disk space management policy:
(i) a disk partitioning strategy, to make sure each file system has the space it needs;
(ii) disk quotas, to give every user a fair share of the resources; and
(iii) moment-by-moment monitoring of disk space, to anticipate requirements and cope with resource shortages as soon as they arise.
Modern data storage technologies such as SANs, RAID farms or near-line tape stores can change the details but the basic pattern remains the same. Ultimately any resource has a limited capacity and must be managed to provide the best possible service to its users.
Let’s look at each of these three elements in turn…
Disk Partitioning Strategies
Unlike Windows, where there is usually just a single C: drive containing everything, Unix (and Linux) divides up disk space into separate filesystems, each with its own particular function, such as /home for users and /usr/local for applications. In principle each file system should reside in its own partition on the disk(s). This has the obvious advantage that disk space originally assigned to crucial file systems cannot be eroded by activities in other file systems.
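On a running system you can see exactly how directories map onto file systems (and hence onto partitions) with df. A minimal sketch, assuming a POSIX-style df and a Bourne shell:

```shell
#!/bin/sh
# For a few well-known directories, report which file system
# (device and mount point) each one actually lives on.
for dir in / /tmp /usr; do
    # df -P gives portable one-line-per-filesystem output;
    # NR==2 skips the header, $1 is the device, $6 the mount point.
    fs=$(df -P "$dir" | awk 'NR==2 {print $1 " mounted on " $6}')
    echo "$dir -> $fs"
done
```

On a system partitioned the classic Unix way, each directory will report a different device; on a single-partition system they will all report the same one.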
The way you partition a disk depends on the flavour of Unix (ah, good old Unix! – everything’s standard, except for the things that aren’t…) – with Solaris, for example, you would use the “p” option in format, Linux has fdisk (the name, if little else, borrowed from MS-DOS), while IRIX offers fx.
Whatever the name of the disk partition utility, though, there are three possible approaches to partitioning:
There must always be at least two partitions (the swap partition and one other), but it’s possible to run a Unix system in the Windows fashion, with a single partition containing everything other than the swap space. This isn’t recommended since you run the risk that the steady erosion of disk space in /home and /tmp will leave no space for the operating system.
The classic Unix approach is to use separate partitions for each major file system: the root directory, /usr, /opt, /usr/local, /var, /tmp, /home and so on. This has the advantage that filling up or corrupting one partition leaves the others unaffected, improving system reliability. It also makes sense where there are several disks on a system. On the other hand it involves some fairly arbitrary decisions about how much space to allow for each partition, and it can prove inflexible and inefficient if those sizes turn out to be wrong.
Be aware, too, that having multiple partitions will set up ordering dependencies when mounting disks.
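For example, nothing can be mounted at /usr/local until /usr itself is mounted, so /usr must come first in the mount sequence. A hypothetical /etc/fstab fragment for the classic scheme, assuming a Linux system with ext3 and illustrative device names, might look like this:

```
# device     mount point   type  options   dump pass
/dev/sda1    /             ext3  defaults  1    1
/dev/sda2    swap          swap  sw        0    0
/dev/sda5    /usr          ext3  defaults  1    2
/dev/sda6    /usr/local    ext3  defaults  1    2
/dev/sda7    /var          ext3  defaults  1    2
/dev/sda8    /home         ext3  defaults  1    2
```

Since mount -a processes the file top to bottom, the /usr entry must appear before the /usr/local one.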
An alternative gaining increasing favour in recent years is to have just a few partitions: a pair of small partitions for the Unix (or Linux) installation and the swap space, a somewhat larger partition for applications (/usr/local, /opt etc.), and the bulk of the disk space in a single large partition containing all user files (/home and any mass storage).
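As a sketch, the few-partitions layout might be expressed in an sfdisk-style script; the sizes and device names here are purely illustrative, not a recommendation:

```
# Few-partitions layout: OS, swap, applications, one big /home
/dev/sda1 : size=10GiB, type=83
/dev/sda2 : size=2GiB,  type=82
/dev/sda3 : size=20GiB, type=83
/dev/sda4 :             type=83
```

Here sda1 holds the Unix/Linux installation, sda2 is swap, sda3 carries /usr/local and /opt, and sda4, with no size given, takes the rest of the disk for /home.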
In practice the choice of partitioning strategy may be influenced as much by network architecture as by any system administration considerations. Directories such as /usr/local and /home may be located on network drives, leaving only the Unix installation and the swap partition on the local disk.
Once the disks are partitioned, changing the layout is difficult. Deciding how to partition them is therefore a strategic decision, which must be made once and for all. The other two elements of disk management policy are very much day-by-day system administration issues; we will look at them in Part II of this article.