Deploying protocols

Deployment of protocol services is performed on a subset of the cluster nodes that have been designated as protocol nodes.

Protocol nodes will have an additional set of packages installed that allow them to run the NFS, SMB and Object protocol services.

Data is served via these protocols from a pool of addresses designated as Export IP addresses or CES "public" IP addresses. The cluster manages the allocation of addresses in this pool, and IP addresses are automatically migrated to other available protocol nodes if a node fails.

Before deploying protocols, there must be a GPFS™ cluster with GPFS started and at least one file system to serve as the CES shared file system.
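For example, you can confirm both prerequisites with standard GPFS commands; this is a quick sketch, run from any node in the cluster:

$ /usr/lpp/mmfs/bin/mmgetstate -a
$ /usr/lpp/mmfs/bin/mmlsfs all

mmgetstate -a reports the GPFS daemon state on every node, and mmlsfs all lists the attributes of every file system defined in the cluster.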

Notes:
  1. Only nodes running RHEL 7.0 and 7.1 on x86_64 and ppc64 architectures can be designated as protocol nodes.
  2. The packages for all protocols will be installed on every node designated as a protocol node; this is done even if a service is not enabled in your configuration.
  3. Services are enabled and disabled cluster wide; this means that every protocol node will serve all enabled protocols.
  4. If SMB is enabled, the number of protocol nodes is limited to 16 nodes.
  5. The spectrumscale installation toolkit no longer supports adding protocol nodes to an existing ESS cluster prior to ESS version 3.5.

Defining a shared file system

To use protocol services, a shared file system must be defined. If the install toolkit is used to install GPFS, NSDs can be created at that time; if they are associated with a file system, the file system is then created during deployment. If GPFS has already been configured, the shared file system can be specified manually or by re-running the spectrumscale install command to assign an existing NSD to the file system. If you re-run spectrumscale install, be sure that your NSD servers are compatible with the spectrumscale installation toolkit and are contained within the clusterdefinition.txt file.
Note: If you do not configure a shared file system for protocols, the install toolkit automatically creates one named ces_shared, mounted at /ibm/ces_shared. This works only if you have created at least one NSD that is not already assigned to a file system.
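For example, a minimal sketch of creating such an NSD with the install toolkit is shown below; the device path, server name, and file system name are placeholders, and the exact flags can vary by toolkit release, so check ./spectrumscale nsd add -h first:

$ ./spectrumscale nsd add "/dev/sdb" -p nsdserver1.example.com -fs cesshared

Omit the -fs flag to leave the NSD unassigned to a file system, which allows the toolkit to create the default ces_shared file system automatically.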

The spectrumscale config protocols command can be used to define the shared file system (-f) and mount point (-m):

usage: spectrumscale config protocols [-h] [-l] [-f FILESYSTEM]
                                      [-m MOUNTPOINT]
For example:

$ ./spectrumscale config protocols -f cesshared -m /gpfs/cesshared

To show the current settings, issue this command:

$ ./spectrumscale config protocols --list
[ INFO  ] No changes made. Current settings are as follows:
[ INFO  ] Shared File System Name is cesshared
[ INFO  ] Shared File System Mountpoint is /gpfs/cesshared

Adding nodes to the cluster definition file

To deploy protocols on nodes in your cluster, they must be added to the cluster definition file as protocol nodes.

Run the following command to designate a node as a protocol node:

./spectrumscale node add NODE_IP -p 
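For example, to designate three nodes as protocol nodes (the addresses shown are placeholders):

$ ./spectrumscale node add 198.51.100.11 -p
$ ./spectrumscale node add 198.51.100.12 -p
$ ./spectrumscale node add 198.51.100.13 -p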

Enabling protocols

To enable or disable a set of protocols, use the spectrumscale enable and spectrumscale disable commands. For example:

$ ./spectrumscale enable smb nfs
[ INFO  ] Enabling SMB on all protocol nodes.
[ INFO  ] Enabling NFS on all protocol nodes.

The current list of enabled protocols is shown as part of the spectrumscale node list command output; for example:

$ ./spectrumscale node list
[ INFO  ] List of nodes in current configuration:
[ INFO  ] [Installer Node]
[ INFO  ] 9.71.18.169
[ INFO  ]
[ INFO  ] [Cluster Name]
[ INFO  ] ESDev1
[ INFO  ]
[ INFO  ] [Protocols]
[ INFO  ] Object : Disabled
[ INFO  ] SMB : Enabled
[ INFO  ] NFS : Enabled
[ INFO  ]
[ INFO  ] GPFS Node                    Admin  Quorum  Manager  NSD Server  Protocol
[ INFO  ] ESDev1-GPFS1                   X       X       X                    X
[ INFO  ] ESDev1-GPFS2                                   X                    X
[ INFO  ] ESDev1-GPFS3                                   X                    X
[ INFO  ] ESDev1-GPFS4                   X       X       X          X
[ INFO  ] ESDev1-GPFS5                   X       X       X          X

Configuring Object

If the object protocol is enabled, further protocol-specific configuration is required; these options are configured using the spectrumscale config object command, which has the following parameters:

usage: spectrumscale config object [-h] [-l] [-f FILESYSTEM] [-m MOUNTPOINT]
                                   [-e ENDPOINT] [-o OBJECTBASE]
                                   [-i INODEALLOCATION] [-t ADMINTOKEN]
                                   [-au ADMINUSER] [-ap ADMINPASSWORD]
                                   [-su SWIFTUSER] [-sp SWIFTPASSWORD]
                                   [-dp DATABASEPASSWORD]
                                   [-mr MULTIREGION] [-rn REGIONNUMBER]
                                   [-s3 {on,off}]

The object protocol requires a dedicated fileset as its back-end storage; this fileset is defined using the --filesystem (-f), --mountpoint (-m), and --objectbase (-o) flags to define the file system, mount point, and fileset, respectively.
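For example, a sketch of defining the object store location; the file system, mount point, and fileset names here are illustrative:

$ ./spectrumscale config object -f ObjectFS -m /ibm/ObjectFS -o object_fileset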

The --endpoint (-e) option specifies the host name that is used for access to the file store. This should be a round-robin DNS entry that maps to all CES IP addresses, which distributes the load of all keystone and object traffic routed to this host name. In other words, the endpoint is a name in DNS or in a load balancer that maps to the group of export IPs (that is, the CES IPs assigned on the protocol nodes).
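For example, a round-robin entry in BIND zone-file syntax might look like the following sketch, where the name and addresses are placeholders:

protocols.example.com.  IN  A  198.51.100.101
protocols.example.com.  IN  A  198.51.100.102
protocols.example.com.  IN  A  198.51.100.103

The matching endpoint configuration would then be:

$ ./spectrumscale config object -e protocols.example.com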

The following user name and password options specify the credentials used for the creation of an admin user within Keystone for object and container access. The system prompts for these during spectrumscale deploy precheck and spectrumscale deploy if they have not already been configured by spectrumscale. The following example shows how to configure these options; replace the uppercase metavariables with your own values:

$ ./spectrumscale config object -au ADMINUSER -ap ADMINPASSWORD -dp DATABASEPASSWORD

The -au ADMINUSER option specifies the admin user name.

The -ap ADMINPASSWORD option specifies the password for the admin user.

The -su SWIFTUSER option specifies the Swift user name.

The -sp SWIFTPASSWORD option specifies the password for the Swift user.

The -dp DATABASEPASSWORD option specifies the password for the object database.

Note: For each of the password options, you are prompted to enter a Secret Encryption Key, which is used to securely store the password. Choose a memorable pass phrase; you are prompted for it each time you enter the password.

The -mr MULTIREGION option enables the multi-region object deployment feature. The -rn REGIONNUMBER option specifies the region number.

The -s3 option specifies whether the S3 (Amazon Simple Storage Service) API should be enabled.
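For example, to enable the S3 API:

$ ./spectrumscale config object -s3 on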

The -t ADMINTOKEN option sets the admin_token property in the keystone.conf file, which allows access to Keystone by token value rather than by user and password. When installing with a local Keystone, the installer by default dynamically creates the admin_token used during initial configuration and deletes it when done. If the token is set explicitly with -t, admin_token is not deleted from keystone.conf when done. The admin token can also be used when setting up a remote Keystone server if that server has admin_token defined.
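For example, a sketch of setting an explicit admin token; the token value is a placeholder:

$ ./spectrumscale config object -t MySecretAdminToken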

Attention: If you want to use SELinux in Enforcing mode, you must make that decision before proceeding with the deployment. Changing the SELinux mode after the deployment is not supported.

Adding export IPs

Note: This is mandatory for protocol deployment.

Export IPs or CES "public" IPs are used to export data via the protocols (NFS, SMB, Object). File and Object clients use these public IPs to access data on GPFS file systems. Export IPs are shared between all protocols and are organized in a public IP pool (there can be fewer public IPs than protocol nodes).

Note: Export IPs must have an associated hostname and reverse DNS lookup must be configured for each.
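For example, you can spot-check one export IP with the host command; the name and address below are placeholders, and real output will look similar to this:

$ host protocols.example.com
protocols.example.com has address 198.51.100.101
$ host 198.51.100.101
101.100.51.198.in-addr.arpa domain name pointer protocols.example.com.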
  1. To add Export IPs to your cluster, run either this command:
    $ ./spectrumscale config protocols --export-ip-pool EXPORT_IP_POOL

    Or this command:

    $ ./spectrumscale config protocols -e EXPORT_IP_POOL

    Within these commands, EXPORT_IP_POOL is a comma-separated list of IP addresses (see the example after this list).

  2. To view the current configuration, run the following command:
    $ ./spectrumscale node list
    To view the CES shared root and the IP pool, run the following command:
    $ ./spectrumscale config protocols -l
    To view the Object configuration, run the following command:
    $ ./spectrumscale config object -l
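For example, the following sketch adds a pool of three export IPs and then displays the resulting protocol configuration; the addresses are placeholders:

$ ./spectrumscale config protocols -e 198.51.100.101,198.51.100.102,198.51.100.103
$ ./spectrumscale config protocols -l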

Running the spectrumscale deploy command

After adding the protocol-related definition and configuration information described above to the cluster definition file, you can deploy the protocols specified in that file.

To perform deploy checks prior to deploying, use the spectrumscale deploy command with the --precheck (-pr) argument:
./spectrumscale deploy --precheck
This is not required, however, because running spectrumscale deploy with no arguments also performs these checks.
Use the following command to deploy protocols:
./spectrumscale deploy
Note: You will be prompted for the Secret Encryption Key that you provided while configuring object and/or authentication unless you disabled prompting.
This does the following:
  • Performs pre-deploy checks.
  • Creates file systems and deploys protocols as specified in the cluster definition file.
  • Performs post-deploy checks.

You can explicitly specify the --precheck (-pr) option to perform a dry run of pre-deploy checks without starting the deployment. Alternatively, you can specify the --postcheck (-po) option to perform a dry run of post-deploy checks without starting the deployment. These options are mutually exclusive.

After a successful deployment, you can verify the cluster and CES configuration by running this command:
$ /usr/lpp/mmfs/bin/mmlscluster --ces
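You can also list the CES address assignments to confirm that the export IPs were distributed across the protocol nodes; mmces is a standard Spectrum Scale command, and its output varies by configuration:

$ /usr/lpp/mmfs/bin/mmces address list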

What to do next

Upon completion of the tasks described in this topic, you will have deployed additional functionality to an active GPFS cluster. This additional functionality may consist of file systems, protocol nodes, specific protocols, and authentication. Although authentication can be deployed at the same time as protocols, the instructions are separated here for conceptual purposes. If you have not yet set up authentication, continue with the topic Setting up authentication for instructions.

You can rerun the spectrumscale deploy command in the future to do the following (see the sketch after this list):
  • add file systems
  • add protocol nodes
  • enable additional protocols
  • configure and enable authentication
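For example, a sketch of enabling the Object protocol on an existing deployment at a later time; the configuration values are illustrative:

$ ./spectrumscale enable object
$ ./spectrumscale config object -f ObjectFS -m /ibm/ObjectFS -o object_fileset
$ ./spectrumscale deploy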