Installing GPFS

After setting up your install node, you can use the spectrumscale installation toolkit to define nodes, NSDs, and file systems in the cluster definition file, and install GPFS™ according to the information in that file.

Steps for installing GPFS

After defining nodes in the cluster definition file (see Defining configuration options for the spectrumscale installation toolkit), you can set additional GPFS configuration information in that file. You use the spectrumscale config gpfs command to do this.

  1. To specify a cluster name in the cluster definition, use the -c argument:
    ./spectrumscale config gpfs -c gpfscluster01.my.domain.name.com

    If no cluster name is specified, the GPFS Admin Node name is used as the cluster name. If the user-provided name contains periods, it is assumed to be a fully qualified domain name. If the cluster name is not a fully qualified domain name, it inherits the domain of the Admin Node.
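
    For example, assuming a hypothetical Admin Node of admin01.example.com, setting a short cluster name:

    ./spectrumscale config gpfs -c gpfscluster01

    would result in a cluster name of gpfscluster01.example.com.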

  2. To specify a profile to be set on cluster creation in the cluster definition, use the -p argument:
    ./spectrumscale config gpfs -p randomio

    The valid values for the -p option are default (for the gpfsProtocolDefaults profile) and randomio (for the gpfsProtocolRandomIO profile). The profiles are based on workload type: sequential I/O (gpfsProtocolDefaults) or random I/O (gpfsProtocolRandomIO). These profiles provide initial default tunables for a cluster. If additional tunable changes are required, see the mmchconfig command and the mmcrcluster command in IBM Spectrum Scale: Administration and Programming Reference.

    If no profile is specified in the cluster definition, the gpfsProtocolDefaults profile will be automatically set on cluster creation.
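
    If individual tunables need changing after cluster creation, mmchconfig can adjust them on the running cluster. A minimal sketch (the pagepool value shown is purely illustrative, not a recommendation):

    mmchconfig pagepool=4G -N all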

  3. To specify the remote shell binary to be used by GPFS, use the -r argument:
    ./spectrumscale config gpfs -r /usr/bin/ssh

    If no remote shell is specified in the cluster definition, /usr/bin/ssh will be used as the default.

  4. To specify the remote file copy binary to be used by GPFS, use the -rc argument:
    ./spectrumscale config gpfs -rc /usr/bin/scp

    If no remote file copy binary is specified in the cluster definition, /usr/bin/scp will be used as the default.

  5. To specify an ephemeral port range to be set on all GPFS nodes, use the -e argument:
    ./spectrumscale config gpfs -e 60000-65000

    For information about the ephemeral port range, see GPFS port usage in IBM Spectrum Scale: Advanced Administration Guide.

    If no port range is specified in the cluster definition, 60000-61000 will be used as the default.
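
    This range corresponds to the GPFS tscCmdPortRange configuration attribute (an assumption based on the GPFS port usage documentation), so after cluster creation you can check the applied value with mmlsconfig:

    mmlsconfig tscCmdPortRange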

  6. To view the current GPFS configuration settings, issue the following command:
    $ ./spectrumscale config gpfs --list
    [ INFO  ] No changes made. Current settings are as follows:
    [ INFO  ] GPFS cluster name is gpfscluster01
    [ INFO  ] GPFS profile is default
    [ INFO  ] Remote shell command is /usr/bin/ssh
    [ INFO  ] Remote file copy command is /usr/bin/scp
    [ INFO  ] GPFS Daemon communication port range is 60000-61000

To perform environment checks prior to running the install, use spectrumscale install with the -pr argument:

./spectrumscale install -pr

This step is not required, however, because running spectrumscale install with no arguments also performs these checks.

Understanding what the install toolkit does during a spectrumscale install
The spectrumscale installation toolkit automatically performs the steps outlined below in each of these scenarios:
  • It is being used to install GPFS on all nodes, create a new GPFS cluster, and create NSDs.
  • It is being used to add nodes to an existing GPFS cluster and/or create new NSDs.
  • All nodes in the cluster definition file are already in a cluster.

To add nodes to an existing GPFS cluster, at least one node in the cluster definition file must already belong to the cluster that the new nodes are to be added to. The GPFS cluster name in the cluster definition must also exactly match the cluster name reported by mmlscluster.
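
To confirm the existing cluster name before adding nodes, run mmlscluster on a node that is already in the cluster; the name appears in the GPFS cluster name field (output truncated here for illustration):

mmlscluster

GPFS cluster information
========================
  GPFS cluster name:         gpfscluster01.my.domain.name.com
  ...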

After the spectrumscale install command is issued, the toolkit follows one of these flows:

To install GPFS on all nodes, create a new GPFS cluster, and create NSDs
  • Run pre-install environment checks
  • Install the GPFS packages on all nodes
  • Build the GPFS portability layer on all nodes
  • Install and configure performance monitoring tools
  • Create a GPFS cluster
  • Configure licenses
  • Set ephemeral port range
  • Create NSDs (if any are defined in the cluster definition)
  • Run post-install environment checks
To add nodes to an existing GPFS cluster and create any new NSDs
  • Run pre-install environment checks
  • Install the GPFS packages on nodes to be added to the cluster
  • Install and configure performance monitoring tools on nodes to be added to the cluster
  • Add nodes to the GPFS cluster
  • Configure licenses
  • Create NSDs (if any new NSDs are defined in the cluster definition)
  • Run post-install environment checks
If all nodes in the cluster definition are in a cluster
  • Run pre-install environment checks
  • Skip all steps until NSD creation
  • Create NSDs (if any new NSDs are defined in the cluster definition)
  • Run post-install environment checks
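
Once the configuration is complete, a single invocation starts whichever flow applies to your cluster definition:

./spectrumscale install
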
Note: Although NTP configuration is not part of GPFS, configuring NTP on every node is useful for cluster operation and future debugging. You can use the ./spectrumscale config ntp options to configure NTP on all nodes at installation time. For more information, see the spectrumscale command in IBM Spectrum Scale: Administration and Programming Reference.
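
A minimal NTP configuration sketch, assuming the config ntp subcommand takes -e to enable NTP and -s to list upstream servers (the server names are hypothetical; verify the exact flags with ./spectrumscale config ntp -h on your version):

# -e/-s flags assumed; server names hypothetical
./spectrumscale config ntp -e on -s ntpserver1.example.com,ntpserver2.example.com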
What to do next
Upon completion of the installation, you will have an active GPFS cluster. Within the cluster, NSDs may have been created, performance monitoring will have been configured, and all product licenses will have been accepted. File systems will be fully created in the next step: deployment.
You can re-run spectrumscale install in the future to:
  • add NSD server nodes
  • add GPFS client nodes
  • add GUI nodes
  • add NSDs
  • define new file systems for use with deploy
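
For example, to add a hypothetical NSD server node to the cluster definition and pick up the change (the -n flag for designating an NSD server is an assumption; verify with ./spectrumscale node add -h):

# node name hypothetical; -n (NSD server) flag assumed
./spectrumscale node add nsd03.example.com -n
./spectrumscale install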