Burgess M., Principles of Network and System Administration (2004).

4.7. SOFTWARE INSTALLATION

131

In RedHat Linux, a similar mechanism looks like this:

rpm -ivh package.rpm

Disks can be mirrored directly, using some kind of cloning program. For instance, the Unix tape archive program (tar) can be used to copy the entire directory tree of one host. To make this work, we first have to perform a basic installation of the OS, with zero packages, and then copy over all the remaining files that constitute the packages we require. In the case of the Debian system above, there is no advantage to doing this, since the package installation mechanism can do the same job more cleanly. For example, with a GNU/Linux distribution:

tar --exclude /proc --exclude /lib/libc.so.5.4.23 \
    --exclude /etc/hostname --exclude /etc/hosts \
    -c -v -f host-imprint.tar /

Note that several files must be excluded from the dump. The file /lib/libc.so.5.4.23 is the C library; if we try to write this file back from backup, the destination computer will crash immediately. /etc/hostname and /etc/hosts contain definitions of the hostname of the destination computer, and must be left unchanged. Once a minimal installation has been performed on the destination host, we can access the tar file and unpack it to install the image:

(cd / ; tar xfp /mnt/dump/my-machine.tar; lilo)

Afterwards, we have to install the boot sector, with the lilo command. The cloning of Unix systems has been discussed in refs. [297, 339].

Note that Windows systems cannot be cloned without special software (e.g. Norton Ghost or PowerQuest Drive Image). There are fundamental technical reasons for this. One is the fact that many host parameters are configured in the impenetrable system registry; unless all of the hardware and software details of every host are the same, cloning will fail with an inconsistency. Another reason is that users are registered in a binary database with security IDs which can have different numerical values on each host. Finally, domain registration cannot be cloned: a host must register manually with its domain server. Novell Zenworks contains a cloning solution that ties NDS objects to disk images.

4.7 Software installation

Most standard operating system installations will not leave us in possession of an immediately usable system. We also need to install third party software in order to get useful work out of the host. Software installation is a similar problem to that of operating system installation. However, third party software originates from a different source than the operating system; it is often bound by license agreements and it needs to be distributed around the network. Some software has to be compiled from source. We therefore need a thoughtful strategy for dealing with software. Specialized schemes for software installation were discussed in refs. [85, 199] and a POSIX draft was discussed in ref. [18], though this idea has not been developed into a true standard. Instead, de-facto and proprietary standards have emerged.

132

CHAPTER 4. HOST MANAGEMENT

4.7.1 Free and proprietary software

Unlike most other popular operating systems, Unix grew up around people who wrote their own software rather than relying on off-the-shelf products. The Internet now contains gigabytes of software for Unix systems which cost nothing. Traditionally, only large concerns, such as oil companies and newspapers, could afford off-the-shelf software for Unix.

There are therefore two kinds of software installation: the installation of software from binaries and the installation of software from source. Commercial software is usually installed from a CD by running an installation program and following the instructions carefully; the only decision we need to make is where we want to install the software. Free software and open source software usually come in source form and must therefore be compiled. Unix programmers have gone to great lengths to make this process as simple as possible for system administrators.

4.7.2 Structuring software

The first step in installing software is to decide where we want to keep it. We could, naturally, locate software anywhere we like, but consider the following:

Software should be separated from the operating system’s installed files, so that the OS can be reinstalled or upgraded without ruining a software installation.

Unix-like operating systems have a naming convention. Compiled software can be collected in a special area, with a bin directory and a lib directory so that binaries and libraries conform to the usual Unix conventions. This makes the system consistent and easy to understand. It also keeps the program search PATH variable simple.

Home-grown files and programs which are special to our own particular site can be kept separate from files which could be used anywhere. That way, we define clearly the validity of the files and we see who is responsible for maintaining them.

The directory traditionally chosen for installed software is called /usr/local. One then makes subdirectories /usr/local/bin and /usr/local/lib and so on [147]. Unix has a de-facto naming standard for directories which we should try to stick to as far as reason permits, so that others will understand how our system is built up.
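Creating such a structure is a one-line job. In this sketch a scratch directory under /tmp stands in for /usr/local, so that it can be tried as an unprivileged user (the path is illustrative):

```shell
# Create the conventional subdirectories under a software prefix.
# /tmp/local-demo stands in for /usr/local so this runs unprivileged.
PREFIX=/tmp/local-demo
mkdir -p "$PREFIX/bin" "$PREFIX/sbin" "$PREFIX/lib" "$PREFIX/etc" "$PREFIX/share"
```

On a real host one would substitute /usr/local (or one of the alternatives discussed below) for the scratch prefix.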

bin Binaries or executables for normal user programs.

sbin Binaries or executables for programs which only system administrators require. Those files in /sbin are often statically linked to avoid problems with libraries which lie on unmounted disks during system booting.

lib Libraries and support files for special software.

etc Configuration files.


share Files which might be shared by several programs or hosts. For instance, databases or help-information; other common resources.

/usr/local
    bin/  lib/  etc/  sbin/  share/
    gnu/
        bin/  lib/  etc/  sbin/  share/
    site/
        bin/  lib/  etc/  sbin/  share/

Figure 4.1: One way of structuring local software. There are plenty of things to criticize here. For instance, is it necessary to place this under the traditional /usr/local tree? Should GNU software be underneath /usr/local? Is it even necessary or desirable to formally distinguish GNU software from other software?

One suggestion for structuring installed software on a Unix-like host is shown in figure 4.1. Another is shown in figure 4.2. Here we divide the software into three categories: regular installed software, GNU software (i.e. free software) and site software. The division is fairly arbitrary. The reasoning behind it is as follows:

/usr/local is the traditional place for software which does not belong to the OS. We could keep everything here, but we will end up installing a lot of software after a while, so it is useful to create two other sub-categories.

GNU software, written by and for the Free Software Foundation, forms a self-contained set of tools which replace many of the older Unix equivalents, like ls and cp. GNU software has its own system of installation and set of standards. GNU will also eventually become an operating system in its own right. Since these files are maintained by one source it makes sense to keep them separate. This also allows us to place GNU utilities ahead of others in a user’s command PATH.
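Placing the GNU utilities first is simply a matter of the order of directories in PATH; the ordering shown below is one reasonable choice, not a requirement:

```shell
# Search /usr/local/gnu/bin before the other software directories,
# so GNU versions of ls, cp etc. shadow the vendor versions.
PATH=/usr/local/gnu/bin:/usr/local/bin:$PATH
export PATH
```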

Site-specific software includes programs and data which we build locally to replace the software or data which ships with the operating system. It also includes special data like the database of aliases for E-mail and the DNS tables for our site. Since it is special to our site, created and maintained by our site, we should keep it separate so that it can be backed up often and separately.

A similar scheme to this was described in refs. [201, 70, 328, 260], in a system called Depot. In the Depot system, software is installed under a file node called /depot, which replaces /usr/local. In the Depot scheme, separate directories are maintained for different machine architectures under a single file tree. This has the advantage of allowing every host to mount the same filesystem, but the disadvantage of making the single filesystem very large. Software is installed in a package-like format under the Depot tree and is linked in to local hosts with symbolic links. A variation on this idea from the University of Edinburgh was described in ref. [10], and another from the University of Waterloo uses a file tree /software to similar ends in ref. [273]. In the Soft environment [109], software installation and user environment configuration are dealt with in a combined abstraction.

/site
    admin/  bin/  lib/  etc/  sbin/  share/
/software
    bin/  lib/  etc/  sbin/  share/
/local
    bin/  lib/  etc/  sbin/  share/

Figure 4.2: Another, more rational way of structuring local software. Here we drop the affectation of placing local modifications under the operating system's /usr tree and separate it completely. Symbolic links can be used to alias /usr/local to one of these directories for historical consistency.
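The aliasing of /usr/local mentioned in the caption of figure 4.2 is a single symbolic link. The sketch below rehearses it with scratch directories under /tmp, since the real /site and /usr/local would require root access (all the paths shown are illustrative):

```shell
# Stand-ins for the real trees, so this runs unprivileged.
mkdir -p /tmp/site-demo/software/bin
touch /tmp/site-demo/software/bin/prog

# Alias the old name to the new location for historical consistency;
# on a real host this would be: ln -s /site/software /usr/local
ln -s /tmp/site-demo/software /tmp/site-demo/usr-local
```

Programs installed under the new tree are now reachable under both names.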

4.7.3 GNU software example

Let us now illustrate the GNU method of installing software, which has become widely accepted. This applies to any type of Unix, and to Windows if one has a Unix compatibility kit, such as Cygwin or UWIN. To begin compiling software, one should always start by looking for a file called README or INSTALL. This tells us what we have to do to compile and install the software. In most cases, it is only necessary to type a couple of commands, as in the following example. When installing GNU software, we are expected to give the name of a prefix for installing the package. Following the structure described above, the prefix is /usr/local for ordinary software, /usr/local/gnu for GNU software and /usr/local/site for site-specific software. Most software installation scripts place files under bin and lib automatically. The steps are as follows.

1. Make sure we are working as a regular, unprivileged user. The software installation procedure might do something which we do not agree with. It is best to work with as few privileges as possible until we are sure.

2. Collect the software package by ftp from a site like ftp.uu.net or ftp.funet.fi etc. Use a program like ncftp for painless anonymous login.

3. Unpack the file using tar zxf software.tar.gz, if using GNU tar, or gunzip software.tar.gz; tar xf software.tar if not.

4. Enter the directory which is unpacked, cd software.


5. Type ./configure --prefix=/usr/local/gnu. This checks the state of our local operating system and other installed software, and configures the software to work correctly there.

6. Type make.

7. If all goes well, type make -n install. This indicates what the make program will install and where. If we have any doubts, this allows us to make changes or abort the procedure without causing any damage.

8. Finally, switch to privileged root/Administrator mode with the su command and type make install. This should be enough to install the software. Note, however, that this step is a security vulnerability. If one blindly executes commands with privilege, one can be tricked into installing back-doors and Trojan horses; see chapter 11.

9. Some installation scripts leave files with the wrong permissions, so that ordinary users cannot access the files. We might have to check that the files have a mode like 555, so that normal users can access them, in spite of the fact that installation programs attempt to set the correct permissions [287].

Today this procedure should be more or less the same for just about any software we pick up. Older software packages sometimes provide only Makefiles which you must customize yourself. Some X11-based windowing software requires you to use the xmkmf (X make makefiles) command instead of configure. You should always look at the README file.
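The steps can be rehearsed end-to-end with a throwaway package. Everything in this sketch, the package "hello-0.1", its Makefile and the /tmp paths, is invented for illustration; the toy package has no configure script, so step 5 is skipped:

```shell
# Build a fake source package to stand in for one fetched by ftp (step 2).
mkdir -p /tmp/build-demo/hello-0.1
printf '#!/bin/sh\necho hello\n' > /tmp/build-demo/hello-0.1/hello.sh
printf 'PREFIX=/tmp/build-demo/prefix\nall:\n\t@echo built\ninstall:\n\tmkdir -p $(PREFIX)/bin\n\tcp hello.sh $(PREFIX)/bin/hello\n' \
    > /tmp/build-demo/hello-0.1/Makefile
cd /tmp/build-demo
tar cf hello-0.1.tar hello-0.1
rm -r hello-0.1

tar xf hello-0.1.tar       # step 3: unpack
cd hello-0.1               # step 4: enter the directory
make                       # step 6: build
make -n install            # step 7: preview what would be installed, and where
make install               # step 8: normally done as root, via su
```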

4.7.4 Proprietary software example

If we are installing proprietary software, we will have received a copy of the program on a CD-ROM, together with licensing information, i.e. a code which activates the program. The steps are somewhat different.

1. To install from CD-ROM we must start work with root/Administrator privileges, so the authenticity of the CD-ROM should be certain.

2. Insert the CD-ROM into the drive. Depending on the operating system, the CD-ROM might or might not be mounted automatically. Check this using the mount command with no arguments, on a Unix-like system. If the CD-ROM has not been mounted, then, for standard CD-ROM formats, the following will normally suffice:

mkdir /cdrom             # if necessary
mount /dev/cdrom /cdrom

For some manufacturers, or on older operating systems, we might have to specify the type of filesystem on the CD-ROM. Check the installation instructions.
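For example, on a system that does not detect the format automatically, we might have to name it explicitly. The device name and the ISO 9660 type shown here are typical, but check the installation instructions for the system in question:

```shell
# Explicitly name the filesystem type when it is not autodetected
# (requires root; device name varies between systems).
mount -t iso9660 /dev/cdrom /cdrom
```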


3. On a Windows system, a clickable icon appears to start the installation program. On a Unix-like system, we need to look for an installation script:

   cd /cdrom/cd-name
   less README
   ./install-script

4. Follow the instructions.

Some proprietary software requires the use of a license server, such as lmgrd. This is installed automatically, and we are required only to edit a configuration file with a license key which is provided, in order to complete the installation. Note, however, that if we are running multiple licensed products on a host, it is not uncommon for these to require different and partly incompatible license servers which interfere with one another. If possible, one should keep to only one license server per subnet.

4.7.5 Installing shared libraries

Systems which use shared libraries or shared objects sometimes need to be reconfigured when new libraries are added to the system. This is because the names of the libraries are cached to provide fast access. The system will not look for a library if it is not in the cache file.

SunOS (prior to Solaris 2): After adding a new library, one must run the command ldconfig lib-directory. The file /etc/ld.so.cache is updated.

GNU/Linux: New library directories are added to the file /etc/ld.so.conf. Then one runs the command ldconfig. The file /etc/ld.so.cache is updated.
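On GNU/Linux, the whole procedure is therefore two commands, run as root. The directory /usr/local/lib is an example; substitute whichever library directory is actually being added:

```shell
# Register a new library directory and rebuild /etc/ld.so.cache
# (requires root).
echo /usr/local/lib >> /etc/ld.so.conf
ldconfig
```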

4.7.6 Configuration security

In the preceding sections we have looked at some examples and suggestions for dealing with software installation. Let us now take a step back from the details to analyze the principles underlying these.

The first is a principle which we shall return to many times in this book. It is one of the key principles in computer science, and we shall be repeating it with slightly different words again and again.

Principle 15 (Separation III). Independent systems should not interfere with one another, or be confused with one another. Keep them in separate storage areas.

The reason is clear: if we mix up files which do not belong together, we lose track of them. They become obscured by a lack of structure. They vanish into anonymity. The reason why all modern computer systems have directories for grouping files, is precisely so that we do not have to mix up all files in one place. This was


discussed in section 4.4.5. The application to software installation is clear: we should never consider installing software in /usr/bin or /bin or /lib or /etc, or any directory which is controlled by the system designers. To do so is like lying down in the middle of a freeway and waiting for a new operating system or upgrade to roll over us. If we mix local modifications with operating system files, we lose track of the differences in the system, and others will not be able to see what we have done. All our hard work will be for nothing when a new system is installed.

Suggestion 1 (Vigilance). Be on the lookout for software which is configured, by default, to install itself on top of the operating system. Always check the destination using make -n install before actually committing to an installation. Programs which are replacements for standard operating system components often break the principle of separation. (Software originating in BSD Unix is often an offender, since it is designed to be a part of BSD Unix, rather than an add-on, e.g. sendmail and BIND.)

The second important point above is that we should never work with root privileges unless we have to. Even when we are compiling software from source, we should not start the compilation with superuser privileges. The reason is clear: why should we trust the source of the program? What if someone has placed a command in the build instructions to destroy the system, plant a virus or open a back-door to intrusion? As long as we work with low privilege then we are protected, to a degree, from problems like this. Programs will not be able to do direct and pervasive damage, but they might still be able to plant Trojan horses that will come into effect when privileged access is acquired.

Principle 16 (Limited privilege). No process or file should be given more privileges than it needs to do its job. To do so is a security hazard.

Another use for this principle arises when we come to configure certain types of software. When a user executes a software package, it normally gets executed with the user privileges of that user. There are two exceptions to this:

Services which are run by the system: Daemons which carry out essential services for users or for the system itself, run with a user ID which is independent of who is logged on to the system. Often, such daemons are started as root or the Administrator when the system boots. In many cases, the daemons do not need these privileges and will function quite happily with ordinary user privileges after changing the permissions of a few files. This is a much safer strategy than allowing them to run with full access. For example, the httpd daemon for the WWW service uses this approach. In recent years, bugs in many programs which run with root privileges have been exploited to give intruders access to the system. If software is run with a non-privileged user ID, this is not possible.

Unix setuid programs: Unix has a mechanism by which special privilege can be given to a user for a short time, while a program is being executed. Software


which is installed with the Unix setuid bit set, and which is owned by root, runs with root’s special privileges. Some software producers install software with this bit set with no respect for the privilege it affords. Most programs which are setuid root do not need to be. A good example of this is the Common Desktop Environment (a multi-vendor desktop environment used on Unix systems). In a recent release, almost every program was installed setuid root. Within only a short time, a list of reports about users exploiting bugs to gain control of these systems appeared. In the next release, none of the programs were setuid root.

All software servers which are started by the system at boot time begin with root/Administrator privileges, but daemons which do not require these privileges can relinquish them, switching to a special unprivileged user once they have started. This approach is used by the Apache WWW server and by MySQL, for instance. These are examples of software which encourage us to create special user IDs for server processes. To do this, we create a special user in the password database, with no login rights (this just reserves a UID). In the above cases, these users are usually called www and mysql. The software allows us to specify these user IDs so that the process owner is switched right after starting the program. If the software itself does not permit this, we can always force a daemon to be started with lower privilege using:

su -c 'command' user

The management tool cfengine can also be used to do this. Note however that Unix server processes which run on reserved (privileged) ports 1–1023 have to be started with root privileges in order to bind to their sockets.
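Reserving such a user ID might look like the following sketch, using the shadow-utils useradd command; the user name www and the daemon path are illustrative, and both commands require root:

```shell
# Reserve a UID with no login shell for the daemon, then start the
# daemon as that user instead of root (names and paths illustrative).
useradd -r -s /usr/sbin/nologin -d /var/www www
su -c '/usr/local/sbin/httpd' www
```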

On the topic of root privilege, a related security issue has to do with programs which write to temporary files.

Principle 17 (Temporary files). Temporary files or sockets which are opened by any program, should not be placed in any publicly writable directory like /tmp. This opens the possibility of race conditions and symbolic link attacks. If possible, configure them to write to a private directory.

Users are always more devious than software writers. A common mistake in programming is to write to a file which ordinary users can create, using a privileged process. If a user is allowed to create a file object with the same name, then he or she can direct a privileged program to write to a different file instead, simply by creating a symbolic or hard link to the other file. This could be used to overwrite the password file or the kernel, or the files of another user. Software writers can avoid this problem by simply unlinking the file they wish to write to first, but that still leaves a window of opportunity after unlinking the file and before opening the new file for writing, during which a malicious user could replace the link (remember that the system time-shares). The lesson is to avoid making privileged programs write to directories which are not private, if possible.
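One common defence, sketched below, is to create a fresh private directory with mktemp and keep all temporary files inside it, rather than writing directly into /tmp (the program name in the template is illustrative):

```shell
# mktemp -d creates a directory with an unpredictable name and mode 700,
# so other users cannot pre-create or replace files inside it.
tmpdir=$(mktemp -d /tmp/myprog.XXXXXX)
echo "scratch data" > "$tmpdir/work"
```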


Before closing this section, a comment is in order. Throughout this chapter, and others, we have been advocating a policy of building the best possible, most logical system by tailoring software to our own environment. Altering absurd software defaults, customizing names and locations of files and changing user identities is no problem as long as everyone who uses and maintains the system is aware of this. If a new administrator started work and, unwittingly, reverted to those software defaults, then the system would break.

Principle 18 (Flagging customization). Customizations and deviations from standards should be made conspicuous to users and administrators. This makes the system easier to understand both for ourselves and our successors.

4.7.7 When compilation fails

Today, software producers who distribute their source code are able to configure it automatically to work with most operating systems. Compilation usually proceeds without incident. Occasionally though, an error will occur which causes the compilation to halt. There are a few things we can try to remedy this:

A previous configuration might have been left lying around; try

   make clean
   make distclean

and start again, from the beginning.

Make sure that the software does not depend on the presence of another package, or library. Install any dependencies, missing libraries and try again.

Errors at the linking stage about missing functions are usually due to missing or un-locatable libraries. Check that the LD_LIBRARY_PATH variable includes all relevant library locations. Are any other environment variables required to configure the software?
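For example, assuming the directory layout suggested in section 4.7.2, the local library directories can be added like this:

```shell
# Prepend the local library directories to the run-time search path,
# keeping any existing value at the end.
LD_LIBRARY_PATH=/usr/local/lib:/usr/local/gnu/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}
export LD_LIBRARY_PATH
```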

Sometimes an extra library needs to be added to the Makefile. To find out whether a library contains a particular function, we can use the following C-shell trick:

   host% cd /lib
   host% foreach lib ( lib* )
   > echo Checking $lib ----------------------
   > nm $lib | grep function
   > end

Carefully try to patch the source code to make it compile.

Check in news groups whether others have experienced the same problem.

Contact the author of the program.