Peter Bray
Telstra Corporation Limited
Peter.Bray@ind.tansu.com.au
AutoInstall was originally developed to automate the installation of Telstra Intelligent Network sites, which comprise as many as thirty hosts, each with a specific hardware, operating system and software configuration. AutoInstall can install an entire site of thirty machines in under two hours, completely unattended. The general nature of AutoInstall allows automated, reproducible installation of any type of system.
New installations require new scripts, and each of these would again encode the various fixed locations required for the installation to proceed. Separation of the important information into configuration files helps, but these locations are still visible to the script developer as they are scattered through the code.
AutoInstall uses high-level directives to separate the definition of the function required from the implementation of that function. These directives are independent of the flavour of Unix being installed and AutoInstall understands the concepts of installing software on a local host, installing software on a mounted client system, and testing the configuration on an installation server prior to installation on a live host.
AutoInstall reduces the requirement for installation documentation as the directives define the steps required and the audit logs confirm that all steps were completed. The logging subsystem collects the output from each of the directives and commands. It also keeps track of any command or directive failures and notes these in the summary log, which will be empty for a successful installation.
In a production environment it is often faster to replace a faulty system disk and AutoInstall the host than it would be to call a trained systems administrator to diagnose the problem and suggest a solution. The faulty disks can be removed for analysis while the host is returned to service.
AutoInstall directive files use location independent references to files. AutoInstall determines the correct file by applying a search path, which starts with information about a particular host, progresses through the operating system and the operating system family, and ends with common configuration. The use of high-level directive files and the implied search path allows an installer to concentrate on what needs to be installed, not how this is performed for a particular operating system.
The following example is a simplified AutoInstall directive file for Oracle database servers. The configuration avoids references to individual hostnames or sites through the use of AutoInstall variables. The implementation of the referenced directive files is not shown here as they are not of concern to the developer of this directive file. This separation is a major benefit of the modular approach taken.
    # Section 1: Host configuration
    require         OSPatches
    require         DNSClient
    suppress        Software/pine
    require         StandardUtilities
    append          /etc/hosts--${GenericConfig::Hostname}
    replace         /etc/hosts.allow--${GenericConfig::Hostname} 0444 root root
    edit            /etc/inittab-ConsoleXterms

    # Section 2: Account management
    rootpasswd      root-${GenericConfig::Site}
    addaccounts     passwd-${GenericConfig::Site} group-${GenericConfig::Site}

    # Section 3: Disk configuration
    require         DiskSuite-4.1
    reboot
    partition       c1t[0-5]d[0-4] OnePartition
    replace         /etc/opt/SUNWmd/md.conf--OneBigSSA 0444 root root
    makefilesystem  /dev/md/rdsk/d40 /export/md/d40 1775 root root
    loopback        /export/md/d40/bigdb /bigdb

    # Section 4: Oracle and database creation
    require         Oracle-7.3.2
    shellscript     CreateDB /bigdb

In section one the operating system patches for this particular operating system are installed. There is no reference to the specific set of patches, as the AutoInstall search path determines the correct OSPatches file to use. The machine is then configured as a DNSClient and standard utilities are installed, except for pine, which has been explicitly suppressed. The /etc/hosts and /etc/hosts.allow files are configured with information specific to the host being installed through the use of the ${GenericConfig::Hostname} variable. The final directive in this section changes the console terminal type to xterms.
In section two various accounts are created, and the root password is set. All of the referenced files use the ${GenericConfig::Site} variable so that separate accounts and root passwords are created for each site.
Section three provides configuration of the disks on the host. This section of the AutoInstall configuration is currently operating system specific as DiskSuite-4.1 is only supported on Solaris. The DiskSuite-4.1 directive file installs DiskSuite, relevant patches, and increases the number of metadevices available. The host is then rebooted which allows DiskSuite metadevices to be used in further directives. AutoInstall continues automatically with the directive following the reboot.
The next directive partitions all of the disks in a SPARCStorageArray on controller one using a partition file called OnePartition. The next three directives provide configuration of DiskSuite to create a single huge filesystem mounted on /bigdb.
The final section installs Oracle and then calls an external shell script to create a database under /bigdb. The Oracle-7.3.2 directive file installs Oracle and its startup files, adds the appropriate shared memory configuration options to the kernel and reboots the host.
The use of the search path for all file references becomes very powerful when applied to directive files. For example, many projects might require database configuration, but each project might have specific changes to that configuration. The directive file Database-Client might look something like:
    require Database-Client-OSPatches
    require Database-Client-Software
    require Database-Client-Config

Since the search path is applied to all file references, any or all of the directive files can be specialised for each project, or even each individual host. For example, the files actually used in the configuration might be:
    Configuration/Common/Directives/Database-Client
    Configuration/SunOS/5.5.1/sparc/Directives/Database-Client-OSPatches
    Configuration/SunOS/5.X/Directives/Database-Client-Software
    Projects/MyProject/Configuration/Common/Directives/Database-Client-Config

If the hierarchy is designed in a modular and reusable way, the project specific trees become quite small and provide very accurate documentation on the specifics of the project. Moving to SunOS 5.6 might simply require the creation of the directive file Configuration/SunOS/5.6/sparc/Directives/Database-Client-OSPatches. Note that this has not changed support for earlier versions such as 5.5.1, just added support for 5.6.
As an example of the power of search paths, a platform under construction might need a utility like sysinfo, which on SunOS 4.x requires a version for each kernel architecture, but on Solaris has a single version for all kernel architectures. The following directive could be used to install the correct version of sysinfo-3.3:

    tarpackage /pkgs/sysinfo-3.3 sysinfo-3.3

The directive finds the appropriate version of sysinfo-3.3.tar (optionally compressed or gzipped) in the AutoInstall hierarchy and installs it on the client into /pkgs/sysinfo-3.3.
The search path is constructed from information about the host to be installed. A (greatly simplified) search path is shown below, and is applied from most-specific (Hostname) to least-specific (Common - used when no other match has been found):
    Hostname             (e.g. myhost.some.dom)
    Domainname           (e.g. some.dom)
    Kernel Architecture  (e.g. sun4u)
    Processor            (e.g. sparc)
    OS Version           (e.g. SunOS/5.5.1)
    OS Family            (e.g. SunOS/5.X)
    OS Common            (e.g. SunOS/Common)
    Common               (e.g. Common)

The search path can be significantly more complex than this as AutoInstall supports projects and sub-projects, each of which could provide hierarchies similar to the above. The important point is that the search path is applied to all file references, allowing complex hierarchies to be developed without changing the directives.
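The effect of this ordering can be sketched in a few lines of Perl. The fragment below is not AutoInstall's implementation; the function name and the example directory list are assumptions, grounded loosely in the paths shown earlier, intended only to show how a location independent reference resolves to the most specific matching file:

    # Illustrative sketch only - not AutoInstall code.
    use strict;
    use warnings;

    # Candidate directories, most specific first. Host, domain and project
    # entries would appear ahead of these; the names follow the earlier examples.
    my @search_path = (
        "Projects/MyProject/Configuration/Common",
        "Configuration/SunOS/5.5.1/sparc",
        "Configuration/SunOS/5.5.1",
        "Configuration/SunOS/5.X",
        "Configuration/SunOS/Common",
        "Configuration/Common",
    );

    # Return the most specific existing copy of a relative reference such as
    # "Directives/Database-Client-OSPatches", or undef if none is found.
    sub resolve_reference {
        my ($reference) = @_;
        for my $dir (@search_path) {
            my $candidate = "$dir/$reference";
            return $candidate if -e $candidate;
        }
        return undef;
    }

Under a resolver of this kind, the Database-Client-Software reference from the earlier example would be satisfied from the SunOS/5.X directory simply because no more specific copy exists.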
Since sysinfo is architecture and operating system specific, the developer would need to place the appropriate tar archive in the relevant directory, as shown below:
    OS                        Pathname
    SunOS 4.1.3_U1 (sun4m)    SunOS/4.1.3_U1/sun4m/TarPackages/sysinfo-3.3.tar.gz
    SunOS 4.1.3_U1 (sun4c)    SunOS/4.1.3_U1/sun4c/TarPackages/sysinfo-3.3.tar.gz
    SunOS 5.5.1 (SPARC)       SunOS/5.5.1/sparc/TarPackages/sysinfo-3.3.tar.gz
    SunOS 5.5.1 (Intel)       SunOS/5.5.1/x86/TarPackages/sysinfo-3.3.tar.gz

Note that the directive would fail if this configuration were used or tested for a sun4d machine running SunOS 4.1.3_U1, as no tar file exists for the sun4d kernel architecture. AutoInstall does not remove the need to compile each relevant version, but it greatly reduces the effort required to install the software on many differing systems. All that is required to make this configuration work for sun4d machines is to place the appropriate tar file in the relevant directory.
The following examples show, for a given directive file and directive, which file AutoInstall searches for in the hierarchy:

    Directive File    Directive                       File to search for
    TerminalServer    append /etc/services--erpcd     etc_services--erpcd
    Bootpd            append /etc/services--bootpd    etc_services--bootpd
AutoInstall is written in PERL and provides several major hooks for users to extend or customise its operation. Each project or sub-project can have a module called LocalModules.pm, which can do simple configuration such as defining sites and variables, or modifying the search path. For the more adventurous it can add or replace directives or provide additional PreInstall and PostInstall configuration.
We have attempted to keep site-specific code out of AutoInstall, and promote the use of LocalModules.pm. In our own case, we define our host naming convention and add some site-specific directives. For example, we have added tests to allow a system to be reinstalled while preserving the database disks. These tests replace standard directives with directives which know how to preserve the data on the database disks while allowing reinstallation of the system disks.
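The LocalModules.pm interface is not shown in this paper, so the following Perl fragment is only an imagined sketch of such a module; the configure entry point and the add_variable, prepend_search_path and replace_directive hooks are invented names meant to convey the flavour of the customisations described above, not AutoInstall's real API:

    # LocalModules.pm - hypothetical sketch; every hook name here is assumed.
    package LocalModules;

    use strict;
    use warnings;

    sub configure {
        my ($autoinstall) = @_;    # assumed handle onto the AutoInstall core

        # Simple configuration: define a site variable used by directive
        # files such as root-${GenericConfig::Site}.
        $autoinstall->add_variable('GenericConfig::Site' => 'SiteA');

        # Modify the search path, for example to favour a project tree.
        $autoinstall->prepend_search_path('Projects/MyProject/Configuration');

        # More adventurous: replace a standard directive with one that
        # preserves the database disks during a reinstall.
        $autoinstall->replace_directive('partition' => \&partition_preserve_db);

        return 1;
    }

    sub partition_preserve_db {
        my ($autoinstall, @args) = @_;
        # Skip repartitioning of disks known to hold database data
        # (details omitted in this sketch).
        return 1;
    }

    1;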
A configuration can be tested on the installation server using testBuild. Directives understand that they are being called in a testing mode, and do not modify any files. By setting environment variables it is possible to test installation of machines which have a different operating system or machine architecture from the installation server. For example:
    TARGET_OS=SunOS TARGET_OSREV=4.1.3_U1 testBuild mySunOSbox.some.dom
    TARGET_PROCESSOR=x86 TARGET_OSREV=5.6 testBuild mySolarisPC.some.dom

The ability to test the configuration without requiring additional hardware allows a separation between installation developers and system installers. Once the installation is fully tested the final hardware can be assembled and installed without intervention. AutoInstall directive files are small text files, and so can be easily versioned and compared. Full audit logs are left on the installed platform and can be mailed to a central site as part of the install.
AutoInstall has numerous benefits when it comes to system testing. System installs prior to AutoInstall required a 300+ page installation manual, and took at least a day per machine. System installs for Sun SPARC machines now require the installer to type boot net - install and to check the installation log once the install is finished, approximately one hour later. A typical machine has over 350 directives applied to it after the operating system has been installed, and each of these directives embodies many lower level concepts. For example, the graft directive creates a directory, unrolls a tarpackage, changes the installed permissions and then calls the graft command.
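As an illustration of the lower level work a single directive can embody, the following Perl fragment sketches the steps just attributed to the graft directive. It is not AutoInstall's implementation; the subroutine, its arguments and the exact graft invocation are assumptions:

    # Illustrative sketch of the steps described for the graft directive.
    use strict;
    use warnings;
    use File::Path qw(make_path);

    sub graft_directive {
        my ($package_dir, $tarball, $mode, $owner, $group) = @_;

        make_path($package_dir);                            # create the directory
        system('tar', '-x', '-f', $tarball, '-C', $package_dir) == 0
            or die "untar of $tarball failed\n";            # unroll the tarpackage
        system('chmod', '-R', $mode, $package_dir);         # fix installed permissions
        system('chown', '-R', "$owner:$group", $package_dir);
        system('graft', '-i', $package_dir) == 0            # assumed graft usage
            or die "graft of $package_dir failed\n";        # call the graft command
    }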
Complete installation testing is now a possibility. System testers can install the machine afresh after all of the developer "fixes" have been applied and tested to ensure that the delivered system is the one that was tested, and that all such "fixes" make it into a formal release. Machines can be identically configured at all sites, including the testbed, so that problems found at one site can be checked and fixed at other sites with certainty.
It is also possible to have reconfigurable testbeds. If a common hardware base is chosen it is feasible to rebuild the testbed for each project as it goes into system testing, and switch between testbed environments through a simple hands-off install. This is particularly useful for retrofitting patches to earlier releases as the testbed can be returned to the state it was in when the application was released.
A rich and reusable base configuration allows rapid development of configurations for new operating systems and one-off machines. It also enforces standards across platforms, which eases administration and support. If the base configuration is suitably rich, it is possible to have extremely small project configuration trees, which have only truly project-specific changes.
An example of such a design might be:
    PlatformIndependentBaseline

    require VendorOS                        # Top-level wrapper
    require VendorOS-Patches                # Patch set for particular OS
    require VendorOS-BasicSecurity          # Tighten default vendor security
    require VendorOS-AdditionalTools        # Add vendor tools not in the OS
    require VendorOS-DisableUnusedFeatures  # Remove unwanted baggage from OS

    require SystemAdministration
    require SystemAdministration-Basic      # e.g. syslog policy

    require sendmail-removal
    require qmail-software                  # Install a good MTA
    require qmail-nullclient-config         # A base - may well be overridden

    require xntp-software
    require xntp-client-config              # IP addresses project specific

    require sudo-software
    require sudo-config                     # Sudo permissions project specific

    require CoreUtilities
    require bash-software                   # A decent universal shell
    require sysinfo-software
    require UserEnvironment-Basic

To use this infrastructure, a project might have a directive file MyProject-Baseline which would require PlatformIndependentBaseline and then perform project specific customisations.
The configuration can be tested with testBuild, which will flag any missing files or incomplete directives. The placement of directive files in operating system family directories (e.g. SunOS/5.X) allows a standard configuration for all of that family, which can then be overridden for specific versions if required.
AutoInstall also provides support for "foreign" operating system installations. For example, SunOS is installed by booting the host across the network under Solaris 2.x and then restoring dumps of SunOS onto the disks. Once SunOS has been installed the same directive files are used for both Solaris and SunOS postinstalls.
The current version of AutoInstall provides full support for Solaris (SPARC and x86) and SunOS, and preliminary support for SCO and Linux. The AutoInstall hierarchy is installed onto an install server, and hosts are built from that hierarchy using a read-only NFS mount. AutoInstall has a self-contained toolset which is compiled and used for the relevant architecture and operating system configuration.
A recent addition to the toolset has been a custom Solaris 2.5.1 CDROM, which installs the install server itself. The install server is installed with the customised CDROM and the AutoInstall tape, and installs itself without intervention. The custom CDROM builds the filesystems for AutoInstall, reads the tape, and calls AutoInstall to make any other required changes, including copying and patching the newly installed CDROM image for future installs.
One major benefit of the use of AutoInstall is the standardisation of platform installation across operating systems and architectures. Installing for a different architecture or operating system is a matter of compiling the tools and packages for the target version.
AutoInstall was designed for static installation of hosts. This is both a strength, as it guarantees a known state, and a weakness, as it does not cover the full life cycle of deployed platforms. The installation of versioned packages and the use of graft simplify this task. The Configs package provides a longer term view of platform maintenance, and an eventual merging of AutoInstall and Configs would provide support for the full lifecycle.
The pseudo-directives require, include and suppress are currently executed during configuration parsing. It would be useful to have these evaluated at runtime to allow diversions, or to pass arguments to directive files, based on previous configuration entries. For example, it would be useful to be able to say:

    require PlatformSupportScripts ${LocalConfig::PSSVersion}

where ${LocalConfig::PSSVersion} has been set at some stage in the directives hierarchy. Even more generally, it would be useful to have default settings of variables, as in the following example from a hypothetical PlatformSupportScripts directive file:
    setvariable PSSVersion $1 ${LocalConfig::PSSVersion} 0.0.1

This would set PSSVersion to the first argument passed to this directive file, or to the value of ${LocalConfig::PSSVersion} if that is set, or to 0.0.1 as a default.
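Expressed in Perl rather than directive syntax, the proposed rule is the familiar cascade of defaults. The variable names in this small sketch are invented for illustration:

    #!/usr/bin/perl
    # The proposed default rule: first argument if given, else an
    # already-set configuration variable, else a hard-wired default.
    use strict;
    use warnings;

    our $PSSVersion_from_config;         # stands in for ${LocalConfig::PSSVersion}

    my $PSSVersion = $ARGV[0]            # $1: argument passed to the directive file
                  // $PSSVersion_from_config
                  // '0.0.1';            # the default

    print "PSSVersion = $PSSVersion\n";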
The workaround with the current parser is to write directive files using variables which must be set somewhere in the directive hierarchy or the parser will complain. This method of writing directive files exposes too much internal detail to the end-user as top-level directive files end up as:
    setvariable SomeSoftwareVersion 1.2.1
    setvariable ThisSoftwareVersion 3.5.0
    setvariable ThatSoftwareVersion 2.4.3
    require MyProject-Baseline

The other major area of work would be the generalisation of disk, filesystem and mirroring configuration in an operating system independent manner. A metaconfiguration file is required which describes the relationship between disk slices, slice size, filesystems, mirrors and stripes. Much of the supporting infrastructure already exists, but work is required to provide an operating system independent view of these facilities. This information could also be used to generate profiles for the initial operating system installation, for use by JumpStart, KickStart or similar.
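No format exists yet for such a metaconfiguration, so the following Perl data structure is purely an assumption; it is included only to suggest the relationships (slices, sizes, filesystems, mirrors and stripes) such a file would need to capture:

    # Hypothetical metaconfiguration sketch - every key and value is invented.
    use strict;
    use warnings;

    my %metaconfig = (
        '/bigdb' => {
            layout  => 'stripe',                         # stripe across slices
            mirror  => 1,                                # keep a mirrored copy
            slices  => [ map { "c1t${_}d0s0" } 0 .. 5 ],
            size    => 'remainder',                      # whatever space is left
            options => { mode => '1775', owner => 'root', group => 'root' },
        },
        '/export/home' => {
            layout => 'simple',
            slices => [ 'c0t0d0s7' ],
            size   => '2g',
        },
    );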
The same directive files are used for automated installation and non-destructive testing and, if designed correctly, for all operating systems and architectures. The standardised error reporting provides a consistency which is difficult to achieve in discrete installation scripts and provides this consistency across architectures and operating systems. The powerful search path mechanism allows generic functionality to be abstracted into common directive files which can then be used in project specific specialisation.
Reproducible, verifiable, hands off installation provides a major boost to the reliable installation of a site. AutoInstall can install a network of thirty machines, including all patches, software and host configuration without intervention, in less time and more reliably than it would take to install a single machine manually.
Peter Bray is an analyst/programmer specialising in infrastructure development on UNIX systems. His computing interests include operating systems, programming languages and object oriented design and analysis. He enjoys studying and specifying the architectural issues of complex systems. His email address is Peter.Bray@ind.tansu.com.au
Configs - Host configuration tool by Simon Gerraty - http://www.quick.com.au/FreeWare/configs.html