Manually installing IBM Spectrum Scale management GUI

The management GUI provides an easy way for the users to configure, manage, and monitor the IBM Spectrum Scale™ system.

You can install the management GUI manually, either by using a package manager (yum or zypper) or by installing the rpms individually, as described in the following sections.

Prerequisites

The prerequisites that apply to installing the IBM Spectrum Scale system through the CLI also apply to installation through the GUI. For more information on the prerequisites for installation, see Installation prerequisites.

The installation rpm that is part of the IBM Spectrum Scale GUI package is required for the installation. You need to extract this package to start the installation. The performance tool rpms are also required to enable the performance monitoring tool that is integrated into the GUI. The following rpms are required for the performance monitoring tools in the GUI:
  • The performance tool collector rpm. This rpm is placed only on the collector nodes.
  • The performance tool sensor rpm. This rpm is installed on the sensor nodes, if it is not already installed.
The following table lists the IBM Spectrum Scale GUI and performance tool packages that are required for different platforms.
Table 1. GUI packages required for each platform
GUI packages (X is the counter for the latest version, starting with 1):
  • RHEL 7.x x86_64: gpfs.gui-4.2.0-X.el7.x86_64.rpm
  • RHEL 7.x ppc64 (big endian): gpfs.gui-4.2.0-X.el7.ppc64.rpm
  • RHEL 7.x ppc64le (little endian): gpfs.gui-4.2.0-X.el7.ppc64le.rpm
  • SLES12 x86_64: gpfs.gui-4.2.0-X.sles12.x86_64.rpm
  • SLES12 ppc64le (little endian): gpfs.gui-4.2.0-X.sles12.ppc64le.rpm

Performance tool packages:
  • RHEL 6.x x86_64: gpfs.gss.pmsensors-4.2.0-0.el6.x86_64.rpm, gpfs.gss.pmcollector-4.2.0-0.el6.x86_64.rpm
  • RHEL 7.x x86_64: gpfs.gss.pmsensors-4.2.0-0.el7.x86_64.rpm, gpfs.gss.pmcollector-4.2.0-0.el7.x86_64.rpm
  • RHEL 6.x ppc64: gpfs.gss.pmsensors-4.2.0-0.el6.ppc64.rpm, gpfs.gss.pmcollector-4.2.0-0.el6.ppc64.rpm
  • RHEL 7.x ppc64: gpfs.gss.pmsensors-4.2.0-0.el7.ppc64.rpm, gpfs.gss.pmcollector-4.2.0-0.el7.ppc64.rpm
  • RHEL 7.x ppc64le: gpfs.gss.pmsensors-4.2.0-0.el7.ppc64le.rpm, gpfs.gss.pmcollector-4.2.0-0.el7.ppc64le.rpm
  • SLES12 x86_64: gpfs.gss.pmsensors-4.2.0-0.SLES12.x86_64.rpm, gpfs.gss.pmcollector-4.2.0-0.SLES12.x86_64.rpm
  • SLES12 ppc64: gpfs.gss.pmsensors-4.2.0-0.SLES12.ppc64.rpm, gpfs.gss.pmcollector-4.2.0-0.SLES12.ppc64.rpm
  • SLES12 ppc64le: gpfs.gss.pmsensors-4.2.0-0.SLES12.ppc64le.rpm, gpfs.gss.pmcollector-4.2.0-0.SLES12.ppc64le.rpm
  • SLES11 ppc64 (sensor only): gpfs.gss.pmsensors-4.2.0-0.SLES11.ppc64.rpm
  • Debian sensor packages: gpfs.gss.pmsensors_4.2.0-0.U14.04_amd64.deb, gpfs.gss.pmsensors_4.2.0-0.U12.04_amd64.deb, gpfs.gss.pmsensors_4.2.0-0.D7.6_amd64.deb, gpfs.gss.pmsensors_4.2.0-0.D6.0.10_amd64.deb

Ensure that the performance tool collector runs on the same node as the GUI.

Yum repository setup

You can use a yum repository to manually install the GUI rpm files. This is the preferred way to install the GUI because yum checks the dependencies and automatically installs missing platform dependencies, such as the PostgreSQL packages, which are required but not included in the package.
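As a sketch, assuming the GUI and performance tool rpms were extracted to a local directory (the directory path and repository name below are illustrative assumptions, not fixed locations), a local yum repository could be set up as follows:

```shell
# Build repository metadata over the extracted rpms (path is an assumption)
createrepo /var/local-repo/spectrum-scale-gui

# Define a local repository so that yum can resolve dependencies from it
cat > /etc/yum.repos.d/spectrum-scale-gui.repo <<'EOF'
[spectrum-scale-gui]
name=IBM Spectrum Scale GUI local repository
baseurl=file:///var/local-repo/spectrum-scale-gui
enabled=1
gpgcheck=0
EOF

# yum can now install the GUI and pull in missing dependencies such as PostgreSQL
yum install gpfs.gui
```

With the repository in place, yum resolves and installs the missing platform dependencies automatically instead of failing on them as the plain rpm command would.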

Installation steps

You can install the management GUI either by using the package manager (yum or zypper commands) or by installing the rpms individually with the rpm command.

Installing management GUI by using package manager (yum or zypper commands)

This method is recommended because the package manager checks the dependencies and automatically installs missing platform dependencies. Issue the following commands to install the management GUI:

Red Hat Enterprise Linux

yum install gpfs.gss.pmsensors-4.2.0-0.el7.<arch>.rpm
yum install gpfs.gss.pmcollector-4.2.0-0.el7.<arch>.rpm
yum install gpfs.gui-4.2.0-0.el7.<arch>.rpm

SLES

zypper install gpfs.gss.pmsensors-4.2.0-0.SLES12.<arch>.rpm
zypper install gpfs.gss.pmcollector-4.2.0-0.SLES12.<arch>.rpm
zypper install gpfs.gui-4.2.0-0.sles12.<arch>.rpm

Installing management GUI by using rpms

Issue the following commands:

Red Hat Enterprise Linux

rpm -ivh gpfs.gss.pmsensors-4.2.0-0.el7.<arch>.rpm 
rpm -ivh gpfs.gss.pmcollector-4.2.0-0.el7.<arch>.rpm
rpm -ivh gpfs.gui-4.2.0-0.el7.<arch>.rpm

SLES

rpm -ivh gpfs.gss.pmsensors-4.2.0-0.SLES12.<arch>.rpm
rpm -ivh gpfs.gss.pmcollector-4.2.0-0.SLES12.<arch>.rpm
rpm -ivh gpfs.gui-4.2.0-0.sles12.<arch>.rpm

The sensor rpm must be installed on any additional node that you want to monitor. All sensors must point to the collector node.

Note: The default user name and password to access the IBM Spectrum Scale management GUI are admin and admin001, respectively.

Enabling performance tools in management GUI

The performance tool is installed into /opt/IBM/zimon. The following important configuration files are available in this folder:

ZIMonSensors.cfg
This is the sensor configuration file. It controls which sensors are activated and sets the reporting interval of each sensor. Setting the reporting interval to -1 disables a sensor; a positive number defines the reporting period in seconds. The smallest possible period is one second.
ZIMonCollector.cfg
This is the collector configuration file and it defines the number of aggregation levels and the maximum amount of memory that is used. By default, three domains are created: a raw domain that stores the metrics uncompressed, a first aggregation domain that aggregates data to 1-minute averages, and a second aggregation domain that stores data in 15-minute averages. Each domain can be configured with the amount of memory that is used by the in-memory database and also the maximum file size and number of files that are used to store the data on disk.
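For illustration, sensor entries in ZIMonSensors.cfg take roughly the following shape (the sensor names and periods shown here are example values, not a recommended configuration):

```
# Excerpt from /opt/IBM/zimon/ZIMonSensors.cfg (illustrative values only)
sensors = {
        name = "CPU"
        period = 1        # report once per second
}, {
        name = "GPFSDiskCap"
        period = -1       # -1 disables this sensor
}
```

Each entry pairs a sensor name with its reporting period, so disabling or retuning a sensor is a one-line change followed by a sensor restart.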

The startup script of the sensor defines a list of collectors to which data is sent. By default, the sensor unit reports to a collector that runs on localhost. If the collector runs on a different node, change the sensor configuration to point to the collector's IP address.

To enable and initialize the performance tool in the management GUI, do the following:
  1. To initialize the performance tool, issue the systemctl start command as shown in the following example:

    On collector nodes: systemctl start pmcollector

    On all sensor nodes: systemctl start pmsensors

    If the performance tool is not configured on your cluster, the system displays the following error message when you try to start pmsensors on the sensor nodes:

    Job for pmsensors.service failed. See "systemctl status pmsensors.service" and "journalctl -xn" for
    details.

    To resolve this problem, first configure the cluster for the performance tool by using the mmperfmon command. You also need to configure a set of collector nodes while issuing the command as shown in the following example:

    mmperfmon config generate --collectors [ipaddress/hostname of node1, ipaddress/hostname of node2, …]
  2. Enable the sensors on the cluster by using the mmchnode command. Issuing this command configures and starts the performance tool sensors on the nodes.

    Before issuing the mmchnode command, ensure that the pmsensors package is already installed on all sensor nodes, as shown in the following example:
    mmchnode --perfmon -N [SENSOR_NODE_LIST]
    [SENSOR_NODE_LIST] is a comma-separated list of sensor nodes' host names or IP addresses.
    You can also manually configure the performance tool sensor nodes by editing the following file on all sensor nodes: /opt/IBM/zimon/ZIMonSensors.cfg. Add the host name or IP address of the node that hosts the collector in the following section of the configuration file:
    collectors = {
    host = "[HOSTNAME or IP ADDRESS]"
    port = "4739"
    }
    This specifies the collector to which the sensor is reporting.
  3. To show the file system capacity, update the GPFSDiskCap sensor to set the frequency at which the capacity is refreshed. You need to specify this value in seconds, as shown in the following example:
    mmperfmon config update GPFSDiskCap.restrict=gui_node GPFSDiskCap.period=86400

    This sensor must be enabled only on a single node, preferably the GUI node. If this sensor is disabled, the GUI does not show any capacity data. The recommended period is 86400 seconds, that is, once per day. Because this sensor runs mmdf, it is not recommended to use a value less than 10800 (every three hours) for GPFSDiskCap.period.
  4. Enable quota in the file system to get capacity data for filesets in the GUI. For information on enabling quota, see the mmchfs -q option in the mmchfs command and the mmcheckquota command in IBM Spectrum Scale: Administration and Programming Reference.
  5. Start the sensor on every sensor node as shown in the following example:
    systemctl start pmsensors
  6. After configuring the performance tool, you can start the IBM Spectrum Scale management GUI as shown in the following example:
    systemctl start gpfsgui
  7. To make sure that the GUI and performance tool are started on the boot process, issue the following commands:
    systemctl enable gpfsgui.service 
    systemctl enable pmsensors.service
    systemctl enable pmcollector.service 
    Note: The pmsensors and pmcollector scripts are SysV scripts. On systems that use systemd, systemd redirects these scripts to chkconfig, and the following message is displayed on the terminal:
    pmcollector.service is not a native service, redirecting to /sbin/chkconfig.
    Executing /sbin/chkconfig pmcollector on
    The unit files have no [Install] section. They are not meant to be enabled
    using systemctl.
    Possible reasons for having this kind of units are:
    1) A unit may be statically enabled by being symlinked from another unit's
    .wants/ or .requires/ directory.
    2) A unit's purpose may be to act as a helper for some other unit which has
    a requirement dependency on it.
    3) A unit may be started when needed via activation (socket, path, timer,
    D-Bus, udev, scripted systemctl call, ...).

    This is not an error message. It is displayed for information purposes only.
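Taken together, on a typical setup where the GUI node also runs the collector, the steps above condense to a sequence like the following (the host names are placeholders for your own nodes):

```shell
# Configure the performance tool, naming this node as the collector (run once per cluster)
mmperfmon config generate --collectors gui-node.example.com

# Enable and configure the sensors on the nodes to be monitored
mmchnode --perfmon -N sensor1.example.com,sensor2.example.com

# Start the collector on the GUI node and the sensors on every sensor node
systemctl start pmcollector
systemctl start pmsensors

# Start the GUI, and enable all services at boot
systemctl start gpfsgui
systemctl enable gpfsgui.service pmsensors.service pmcollector.service
```

The commands are the same ones used in the numbered steps; only the ordering and the placeholder host names are condensed here for orientation.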

Checking GUI and performance tool status

Issue the systemctl status gpfsgui command to check the GUI status, as shown in the following example:
systemctl status gpfsgui.service
gpfsgui.service - IBM_GPFS_GUI Administration GUI
Loaded: loaded (/usr/lib/systemd/system/gpfsgui.service; disabled)
Active: active (running) since Fri 2015-04-17 09:50:03 CEST; 2h 37min ago
Process: 28141 ExecStopPost=/usr/lpp/mmfs/gui/bin/cfgmantraclient unregister (code=exited, s
tatus=0/SUCCESS)
Process: 29120 ExecStartPre=/usr/lpp/mmfs/gui/bin/check4pgsql (code=exited, status=0/SUCCESS)
Main PID: 29148 (java)
Status: "GSS/GPFS GUI started"
CGroup: /system.slice/gpfsgui.service
└─29148 /opt/ibm/wlp/java/jre/bin/java -XX:MaxPermSize=256m -Dcom.ibm.gpfs.platform=GPFS 
-Dcom.ibm.gpfs.vendor=IBM -Djava.library.path=/opt/ibm/wlp/usr/servers/gpfsgui/lib/ 
-javaagent:/opt/ibm/wlp/bin/tools/ws-javaagent.jar -jar /opt/ibm/wlp/bin/tools/ws-server.jar gpfsgui
--clean

Apr 17 09:50:03 server-21.localnet.com java[29148]: Available memory in the JVM: 484MB
Apr 17 09:50:03 server.localnet.com java[29148]: Max memory that the JVM will attempt to use: 512MB
Apr 17 09:50:03 server.localnet.com java[29148]: Number of processors available to JVM: 2
Apr 17 09:50:03 server.localnet.com java[29148]: Backend started.
Apr 17 09:50:03 server.localnet.com java[29148]: CLI started.
Apr 17 09:50:03 server.localnet.com java[29148]: Context initialized.
Apr 17 09:50:03 server.localnet.com systemd[1]: Started IBM_GPFS_GUI Administration GUI.
Apr 17 09:50:04 server.localnet.com java[29148]: [AUDIT ] CWWKZ0001I: Application / 
started in 6.459 seconds.
Apr 17 09:50:04 server.localnet.com java[29148]: [AUDIT ] CWWKF0012I: The server 
installed the following features: [jdbc-4.0, ssl-1.0, localConnector-1.0, appSecurity-2.0, 
jsp-2.2, servlet-3.0, jndi-1.0, usr:FsccUserRepo, distributedMap-1.0].
Apr 17 09:50:04 server-21.localnet.com java[29148]: [AUDIT ] CWWKF0011I: ==> When you see 
the service was started anything should be OK !

Issue the systemctl status pmcollector and systemctl status pmsensors commands to check the status of the performance tool.

You can also check whether the performance tool backend can receive data by using the GUI, or alternatively by using a command-line performance tool called zc, which is available in the /opt/IBM/zimon directory. For example:
echo "get metrics mem_active, cpu_idle, gpfs_ns_read_ops last 10 bucket_size 1" | ./zc 127.0.0.1
Result example:
1: server-21.localnet.com|Memory|mem_active
2: server-22.localnet.com|Memory|mem_active
3: server-23.localnet.com|Memory|mem_active
4: server-21.localnet.com|CPU|cpu_idle
5: server-22.localnet.com|CPU|cpu_idle
6: server-23.localnet.com|CPU|cpu_idle
7: server-21.localnet.com|GPFSNode|gpfs_ns_read_ops
8: server-22.localnet.com|GPFSNode|gpfs_ns_read_ops
9: server-23.localnet.com|GPFSNode|gpfs_ns_read_ops
Row Timestamp mem_active mem_active mem_active cpu_idle cpu_idle cpu_idle gpfs_ns_read_ops 
gpfs_ns_read_ops gpfs_ns_read_ops
1 2015-05-20 18:16:33 756424 686420 382672 99.000000 100.000000 95.980000 0 0 0
2 2015-05-20 18:16:34 756424 686420 382672 100.000000 100.000000 99.500000 0 0 0
3 2015-05-20 18:16:35 756424 686420 382672 100.000000 99.500000 100.000000 0 0 6
4 2015-05-20 18:16:36 756424 686420 382672 99.500000 100.000000 100.000000 0 0 0
5 2015-05-20 18:16:37 756424 686520 382672 100.000000 98.510000 100.000000 0 0 0
6 2015-05-20 18:16:38 774456 686448 384684 73.000000 100.000000 96.520000 0 0 0
7 2015-05-20 18:16:39 784092 686420 382888 86.360000 100.000000 52.760000 0 0 0
8 2015-05-20 18:16:40 786004 697712 382688 46.000000 52.760000 100.000000 0 0 0
9 2015-05-20 18:16:41 756632 686560 382688 57.580000 69.000000 100.000000 0 0 0
10 2015-05-20 18:16:42 756460 686436 382688 99.500000 100.000000 100.000000 0 0 0

Node classes used for the management GUI

The IBM Spectrum Scale management GUI automatically creates the following node classes during installation:

  • GUI_SERVERS: Contains all nodes with a server license and all the GUI nodes
  • GUI_MGMT_SERVERS: Contains all GUI nodes

Each node on which the GUI services are started is added to these node classes.
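To verify the membership of these node classes after installation, you can list them with the mmlsnodeclass command, for example:

```shell
# List the members of the GUI-related node classes
mmlsnodeclass GUI_SERVERS
mmlsnodeclass GUI_MGMT_SERVERS
```

The output shows each node class and its member nodes, which is a quick way to confirm that the GUI node was added as expected.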

For information about removing nodes from these node classes, see Removing nodes from management GUI-related node class.

For information about node classes, see Specifying nodes as input to GPFS commands in IBM Spectrum Scale: Administration and Programming Reference.
