Deploying protocols
Deployment of protocol services is performed on a subset of the cluster nodes that have been designated as protocol nodes.
Protocol nodes have an additional set of packages installed that allows them to run the NFS, SMB, and Object protocol services.
Data is served through these protocols from a pool of addresses designated as export IP addresses or CES "public" IP addresses. The cluster manages the allocation of addresses in this pool, and IP addresses are automatically migrated to other available protocol nodes if a node fails.
Before deploying protocols, a GPFS™ cluster must exist with GPFS started and with at least one file system available for use as the CES shared file system.
- Only nodes running RHEL 7.0 and 7.1 on x86_64 and ppc64 architectures can be designated as protocol nodes.
- The packages for all protocols will be installed on every node designated as a protocol node; this is done even if a service is not enabled in your configuration.
- Services are enabled and disabled cluster wide; this means that every protocol node will serve all enabled protocols.
- If SMB is enabled, the number of protocol nodes is limited to 16 nodes.
- The spectrumscale installation toolkit no longer supports adding protocol nodes to an existing ESS cluster prior to ESS version 3.5.
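The constraints above can be sanity-checked in advance. The following is a minimal sketch, with hypothetical node names and an assumed SMB setting, of the one check that is easy to get wrong: the 16-node limit when SMB is enabled.

```shell
# Hypothetical list of nodes you plan to designate with -p, and whether
# you intend to enable SMB; adjust both for your own cluster.
proto_nodes=(node1 node2 node3)
smb_enabled=yes

n=${#proto_nodes[@]}
if [ "$smb_enabled" = yes ] && [ "$n" -gt 16 ]; then
  # SMB limits the cluster to at most 16 protocol nodes.
  echo "SMB limits the cluster to 16 protocol nodes (requested: $n)"
  check=fail
else
  check=ok
fi
echo "protocol node count: $n ($check)"
```

This is only a planning aid; the toolkit's own pre-deploy checks remain authoritative.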
Defining a shared file system
The spectrumscale config protocols command can be used to define the shared file system (-f) and mount point (-m):
usage: spectrumscale config protocols [-h] [-l] [-f FILESYSTEM]
[-m MOUNTPOINT]
For example:
$ ./spectrumscale config protocols -f cesshared -m /gpfs/cesshared
To show the current settings, issue this command:
$ ./spectrumscale config protocols --list
[ INFO ] No changes made. Current settings are as follows:
[ INFO ] Shared File System Name is cesshared
[ INFO ] Shared File System Mountpoint is /gpfs/cesshared
Adding nodes to the cluster definition file
To deploy protocols on nodes in your cluster, they must be added to the cluster definition file as protocol nodes.
Run the following command to designate a node as a protocol node:
./spectrumscale node add NODE_IP -p
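When several nodes are to be designated, the same command can be scripted in a loop. The following is a preview sketch with hypothetical IP addresses; the `run` helper only echoes each command so the loop can be reviewed first (drop the echo to execute it for real).

```shell
# Preview wrapper: prints the command instead of executing it.
run() { echo "+ $*"; }

added=0
for node in 9.71.18.170 9.71.18.171 9.71.18.172; do
  # Designate each node as a protocol node in the cluster definition file.
  run ./spectrumscale node add "$node" -p
  added=$((added + 1))
done
echo "queued $added protocol nodes"
```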
Enabling protocols
To enable or disable a set of protocols, use the spectrumscale enable and spectrumscale disable commands. For example:
$ ./spectrumscale enable smb nfs
[ INFO ] Enabling SMB on all protocol nodes.
[ INFO ] Enabling NFS on all protocol nodes.
The current list of enabled protocols is shown as part of the spectrumscale node list command output; for example:
$ ./spectrumscale node list
[ INFO ] List of nodes in current configuration:
[ INFO ] [Installer Node]
[ INFO ] 9.71.18.169
[ INFO ]
[ INFO ] [Cluster Name]
[ INFO ] ESDev1
[ INFO ]
[ INFO ] [Protocols]
[ INFO ] Object : Disabled
[ INFO ] SMB : Enabled
[ INFO ] NFS : Enabled
[ INFO ]
[ INFO ] GPFS Node Admin Quorum Manager NSD Server Protocol
[ INFO ] ESDev1-GPFS1 X X X X
[ INFO ] ESDev1-GPFS2 X X
[ INFO ] ESDev1-GPFS3 X X
[ INFO ] ESDev1-GPFS4 X X X X
[ INFO ] ESDev1-GPFS5 X X X X
Configuring Object
If the object protocol is enabled, further protocol-specific configuration is required; these options are configured using the spectrumscale config object command, which has the following parameters:
usage: spectrumscale config object [-h] [-l] [-f FILESYSTEM] [-m MOUNTPOINT]
[-e ENDPOINT] [-o OBJECTBASE]
[-i INODEALLOCATION] [-t ADMINTOKEN]
[-au ADMINUSER] [-ap ADMINPASSWORD]
[-su SWIFTUSER] [-sp SWIFTPASSWORD]
[-dp DATABASEPASSWORD]
[-mr MULTIREGION] [-rn REGIONNUMBER]
[-s3 {on,off}]
The object protocol requires a dedicated fileset as its back-end storage. This fileset is defined using the --filesystem (-f), --mountpoint (-m), and --objectbase (-o) flags, which specify the file system, mount point, and fileset, respectively.
The --endpoint (-e) option specifies the host name that is used for access to the object store. This should be a round-robin DNS entry, or a load balancer address, that maps to the group of export IPs (that is, the CES IPs assigned to the protocol nodes), so that Keystone and object traffic routed to this host name is distributed across all protocol nodes.
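The round-robin behavior can be illustrated with a short sketch. The addresses are hypothetical, and in practice the rotation is performed by the DNS server or load balancer, not by the client:

```shell
# Hypothetical CES export IPs behind one round-robin DNS name.
ces_ips=(10.0.0.101 10.0.0.102 10.0.0.103)

# Successive lookups cycle through the pool, spreading clients
# across the protocol nodes.
for lookup in 0 1 2 3 4 5; do
  ip=${ces_ips[$((lookup % ${#ces_ips[@]}))]}
  echo "lookup $lookup -> $ip"
done
```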
The following user name and password options specify the credentials used for the creation of an admin user within Keystone for object and container access. If these have not already been configured with spectrumscale, you are prompted for them during spectrumscale deploy and its pre-check. The following example shows how to configure these options to associate user names and passwords: ./spectrumscale config object -au ADMINUSER -ap ADMINPASSWORD -dp DATABASEPASSWORD
The -au ADMINUSER option specifies the admin user name.
The -su SWIFTUSER option specifies the Swift user name.
The -mr MULTIREGION option enables the multi-region object deployment feature, and the -rn REGIONNUMBER option specifies the region number.
The -s3 option specifies whether the S3 (Amazon Simple Storage Service) API should be enabled.
-t ADMINTOKEN sets the admin_token property in the keystone.conf file which allows access to Keystone by token value rather than user/password. When installing with a local Keystone, by default the installer dynamically creates the admin_token used during initial configuration and deletes it when done. If set explicitly with -t, admin_token is not deleted from keystone.conf when done. The admin token can also be used when setting up a remote Keystone server if that server has admin_token defined.
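Putting the options above together, a complete object configuration might look like the following preview sketch. Every value shown (file system, mount point, fileset, endpoint, and user names) is a hypothetical placeholder, and the `run` wrapper echoes the command instead of executing it:

```shell
# Preview wrapper: prints the command and remembers it instead of running it.
run() { echo "+ $*"; last_cmd="$*"; }

# Hypothetical values throughout; passwords are omitted here, so the
# toolkit would prompt for them during deploy pre-check.
run ./spectrumscale config object \
    -f ObjectFS -m /gpfs/objectfs -o object_fileset \
    -e protocols.example.com -au admin -su swift -s3 on
```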
Adding export IPs
Export IPs or CES "public" IPs are used to export data via the protocols (NFS, SMB, Object). File and Object clients use these public IPs to access data on GPFS file systems. Export IPs are shared between all protocols and are organized in a public IP pool (there can be fewer public IPs than protocol nodes).
- To add export IPs to your cluster, run one of the following commands:
$ ./spectrumscale config protocols --export-ip-pool EXPORT_IP_POOL
or:
$ ./spectrumscale config protocols -e EXPORT_IP_POOL
In these commands, EXPORT_IP_POOL is a comma-separated list of IP addresses.
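Because EXPORT_IP_POOL is a single comma-separated argument, a malformed entry is easy to miss. The following is a plain-bash sketch, with hypothetical addresses, that checks the shape of each entry before handing the value to the config command (it is a format check only, not a reachability test):

```shell
# Hypothetical export IP pool, as it would be passed after -e.
pool="10.0.0.101,10.0.0.102,10.0.0.103"

# Split on commas and verify each entry looks like a dotted-quad IPv4 address.
IFS=',' read -ra ips <<< "$pool"
bad=0
for ip in "${ips[@]}"; do
  [[ $ip =~ ^[0-9]{1,3}(\.[0-9]{1,3}){3}$ ]] || { echo "invalid: $ip"; bad=1; }
done
[ "$bad" -eq 0 ] && echo "pool of ${#ips[@]} addresses looks well formed"
```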
- To view the current configuration, run the following command:
$ ./spectrumscale node list
To view the CES shared root and the IP pool, run the following command:
$ ./spectrumscale config protocols -l
To view the Object configuration, run the following command:
$ ./spectrumscale config object -l
Running the spectrumscale deploy command
After adding the previously-described protocol-related definition and configuration information to the cluster definition file you can deploy the protocols specified in that file.
You can first run only the pre-deploy checks:
$ ./spectrumscale deploy --precheck
This is not required, however, because spectrumscale deploy with no argument also runs these checks before deploying.

The spectrumscale deploy command does the following:
- Performs pre-deploy checks.
- Creates file systems and deploys protocols as specified in the cluster definition file.
- Performs post-deploy checks.
You can explicitly specify the --precheck (-pr) option to perform a dry run of pre-deploy checks without starting the deployment. Alternatively, you can specify the --postcheck (-po) option to perform a dry run of post-deploy checks without starting the deployment. These options are mutually exclusive.
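The three invocations described above can be summarized in a preview sketch; the `run` wrapper echoes each command instead of executing it, so this only documents the sequence you might follow against a real cluster:

```shell
# Preview wrapper: prints each command and counts the phases shown.
run() { echo "+ $*"; phases=$((phases + 1)); }
phases=0

run ./spectrumscale deploy --precheck   # dry run of pre-deploy checks only
run ./spectrumscale deploy              # full deployment (runs both checks)
run ./spectrumscale deploy --postcheck  # dry run of post-deploy checks only
```

Note that --precheck and --postcheck are mutually exclusive, so they are shown as separate invocations.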
After deployment completes, you can verify the CES node and address configuration by running the following command:
$ /usr/lpp/mmfs/bin/mmlscluster --ces
What to do next
Upon completion of the tasks described in this topic, you will have deployed additional functionality to an active GPFS cluster: file systems, protocol nodes, specific protocols, and possibly authentication. Although authentication can be deployed at the same time as protocols, these instructions are separated for conceptual purposes. If you have not yet set up authentication, see the topic Setting up authentication for instructions.
You can rerun the spectrumscale deploy command later to:
- add file systems
- add protocol nodes
- enable additional protocols
- configure and enable authentication