DB2 Version 10.1 for Linux, UNIX, and Windows

Setting up GPFS replication in a GDPC environment

When configuring a geographically dispersed DB2® pureScale® cluster (GDPC) environment, you must set up IBM® General Parallel File System (GPFS™) replication.

Before you begin

Ensure that you already have the cluster installed and running. See Getting the cluster installed and running in a GDPC environment.

Procedure

  1. Prepare the sqllib_shared file system for replication.
    1. To enable replication, change the failure group of the disk in the non-replicated GPFS file system to 1. This is typically the failure group at the first site.
    2. To permit that operation, start GPFS on all hosts and then stop the DB2 instance on each host so that the file system can be unmounted (a scripted variant of the db2stop sequence follows the sample output):
      root@hostA1:/> /home/db2inst1/sqllib/bin/db2cluster -cfs -start -all
      All specified hosts have been started successfully.
      
      db2inst1@hostA1:/home/db2inst1> db2stop instance on hostA1
      SQL1064N DB2STOP processing was successful.
      db2inst1@hostA2:/home/db2inst1> db2stop instance on hostA2
      SQL1064N DB2STOP processing was successful.
      db2inst1@hostA3:/home/db2inst1> db2stop instance on hostA3
      SQL1064N DB2STOP processing was successful.
      db2inst1@hostB1:/home/db2inst1> db2stop instance on hostB1
      SQL1064N DB2STOP processing was successful.
      db2inst1@hostB2:/home/db2inst1> db2stop instance on hostB2
      SQL1064N DB2STOP processing was successful.
      db2inst1@hostB3:/home/db2inst1> db2stop instance on hostB3
      SQL1064N DB2STOP processing was successful.
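      The same stop sequence can be scripted from a single host. The following is a minimal sketch, assuming passwordless ssh between the hosts and the default location of db2profile; adjust the host list to match your cluster:

      # Run as the instance owner (db2inst1 in this example) from any host.
      for host in hostA1 hostA2 hostA3 hostB1 hostB2 hostB3; do
          ssh "$host" ". ~db2inst1/sqllib/db2profile && db2stop instance on $host"
      done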
    3. To ensure that the sqllib_shared file system is cleanly unmounted, put the cluster into maintenance mode:
      root@hostA1:/> /home/db2inst1/sqllib/bin/db2cluster -cm -enter -maintenance -all
      Domain 'db2domain_20110224005525' has entered maintenance mode.
    4. To change the failure group of the disk, you must first determine the Network Shared Disk (NSD) name that GPFS assigned to the disk. In the following sample output, the 'Device' column contains the actual device path and the 'Disk name' column contains the NSD name that GPFS assigned to that device.
      root@hostA1:/> /usr/lpp/mmfs/bin/mmlsnsd -m
      
      Disk name    NSD volume ID      Device         Node name                Remarks
      -------------------------------------------------------------------------------
      gpfs1nsd     091A33584D65F2F6   /dev/hdiskA1   hostA1.torolab.ibm.com
    5. Create a file /tmp/nsdAddFGroup.txt that contains a line describing the disk and indicating that it is part of failure group 1. The file should list all the NSD disks that belong to site A and that will belong to the db2fs1 file system; these disks are then assigned to the first failure group. In this example, there is just one disk:
      root@hostA1:/> cat /tmp/nsdAddFGroup.txt
      gpfs1nsd:::dataAndMetadata:1
      
      root@hostA1:/> /home/db2inst1/sqllib/bin/db2cluster -cfs -list -filesystem
      FILE SYSTEM NAME                  MOUNT_POINT
      --------------------------------- -------------------------
      db2fs1                            /db2sd_20110224005651
      
      root@hostA1:/> /usr/lpp/mmfs/bin/mmlsdisk db2fs1 -L
      disk         driver   sector failure holds    holds                            storage
      name         type       size   group metadata data  status        availability disk id pool    remarks
      ------------ -------- ------ ------- -------- ----- ------------- ------------ ------- ------- ---------
      gpfs1nsd     nsd         512      -1 yes      yes   ready         up                 1 system  desc
      Number of quorum disks: 1
      Read quorum value: 1
      Write quorum value: 1
      
      root@hostA1:/> /usr/lpp/mmfs/bin/mmchdisk db2fs1 change -F /tmp/nsdAddFGroup.txt
      Verifying file system configuration information ...
      mmchdisk: Propagating the cluster configuration data to all
      affected nodes. This is an asynchronous process.
      
      root@hostA1:/> /usr/lpp/mmfs/bin/mmlsdisk db2fs1 -L
      disk         driver   sector failure holds    holds                            storage
      name         type       size   group metadata data  status        availability disk id pool    remarks
      ------------ -------- ------ ------- -------- ----- ------------- ------------ ------- ------- ---------
      gpfs1nsd     nsd         512       1 yes      yes   ready         up                 1 system  desc
      Number of quorum disks: 1
      Read quorum value: 1
      Write quorum value: 1
      Attention: Due to an earlier configuration change the file system
      is no longer properly replicated.

      Note that the disk gpfs1nsd is now assigned to failure group 1 (previously, it was -1).
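      A scripted spot check of the same value is sketched below; it simply pulls the failure group column (the fourth field) for gpfs1nsd out of the mmlsdisk output shown above:

      # Expect the command to print 1 for gpfs1nsd.
      /usr/lpp/mmfs/bin/mmlsdisk db2fs1 -L | awk '$1 == "gpfs1nsd" { print $4 }'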

    6. Change the replication settings for the file system to enable replication:
      root@hostA1:/> /usr/lpp/mmfs/bin/mmchfs db2fs1 -m 2 -r 2
      The desired replication factor exceeds the number of available metadata failure groups.
      Allowed, but files will be unreplicated and hence at risk.
      Attention: The desired replication factor exceeds the number of available data failure groups in 
      storage pool system.
      This is allowed, but files in this storage pool will not be replicated and will therefore be at risk.
    7. Verify that the file system settings have been changed to enable replication:
      root@hostA1:/> /usr/lpp/mmfs/bin/mmlsfs db2fs1
      flag value            description
      ---- ---------------- ---------------------------------
      -f   32768            Minimum fragment size in bytes
      -i   512              Inode size in bytes
      -I   32768            Indirect block size in bytes
      -m   2                Default number of metadata replicas
      -M   2                Maximum number of metadata replicas
      -r   2                Default number of data replicas
      -R   2                Maximum number of data replicas
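      If you prefer a scripted verification, the replica settings can be filtered out of the same mmlsfs output. This is a sketch that relies only on the flag column shown above:

      # Expect the -m, -M, -r, and -R rows to each report a value of 2.
      /usr/lpp/mmfs/bin/mmlsfs db2fs1 | grep -E '^ *-[mMrR] '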
  2. Create an affinity between the network shared disk (NSD) and the hosts.
    Although some physical storage is local to each site, GPFS does not know which LUN is locally accessible (over the SAN) at each site. However, you can indicate to GPFS that it should prefer local LUNs for read operations, which provides better performance. Create a file /tmp/affinitizensd.txt that contains a line indicating that the disk is part of site A, and then use mmchnsd to create the affinity between the NSD and the site.
    root@hostA1:/> cat /tmp/affinitizensd.txt
    gpfs1nsd:hostA1,hostA2,hostA3
    Note that the previous step stopped the DB2 pureScale instance and placed the cluster into CM maintenance mode (as opposed to CFS maintenance mode); this is necessary for the following steps as well. Verify that the file system is not mounted; if it is mounted, unmount it with the db2cluster -cfs -unmount -filesystem file_system_name command.
    root@hostA1:/> /usr/lpp/mmfs/bin/mmlsmount db2fs1
    File system db2fs1 is not mounted.
    
    root@hostA1:/> /usr/lpp/mmfs/bin/mmchnsd -F /tmp/affinitizensd.txt
    mmchnsd: Processing disk gpfs1nsd
    mmchnsd: Propagating the cluster configuration data to all
    affected nodes. This is an asynchronous process.
    1. Verify that the site A computers (hostA*) have become the server hosts for the disk:
      root@hostA1:/> /usr/lpp/mmfs/bin/mmlsnsd -X
      
      Disk name    NSD volume ID      Device         Devtype  Node name                Remarks
      ---------------------------------------------------------------------------------------------
      gpfs1nsd     091A33584D65F2F6   /dev/hdiskA1   hdisk    hostA1.torolab.ibm.com   server node
      gpfs1nsd     091A33584D65F2F6   /dev/hdiskA1   hdisk    hostA2.torolab.ibm.com   server node
      gpfs1nsd     091A33584D65F2F6   /dev/hdiskA1   hdisk    hostA3.torolab.ibm.com   server node
    2. Restart the cluster:
      root@hostA1:/> /home/db2inst1/sqllib/bin/db2cluster -cm -exit -maintenance
      
      Host 'hostA1' has exited maintenance mode. Domain 'db2domain_20110224005525' has been started.
    3. Verify that the file system has been remounted, and then restart the instance on each computer:
      root@hostA1:/> /usr/lpp/mmfs/bin/mmlsmount db2fs1
      File system db2fs1 is mounted on 6 nodes.
      
      db2inst1@hostA1:/home/db2inst1> db2start instance on hostA1
      SQL1063N DB2START processing was successful.
      db2inst1@hostA2:/home/db2inst1> db2start instance on hostA2
      SQL1063N DB2START processing was successful.
      db2inst1@hostA3:/home/db2inst1> db2start instance on hostA3
      SQL1063N DB2START processing was successful.
      db2inst1@hostB1:/home/db2inst1> db2start instance on hostB1
      SQL1063N DB2START processing was successful.
      db2inst1@hostB2:/home/db2inst1> db2start instance on hostB2
      SQL1063N DB2START processing was successful.
      db2inst1@hostB3:/home/db2inst1> db2start instance on hostB3
      SQL1063N DB2START processing was successful.
    4. Verify with db2instance -list that the host resources are now online for all 6 computers:
      $ db2instance -list
      ID    TYPE    STATE    HOME_HOST  CURRENT_HOST  ALERT  PARTITION_NUMBER  LOGICAL_PORT  NETNAME
      --    ----    -----    ---------  ------------  -----  ----------------  ------------  -------
      0     MEMBER  STOPPED  hostA1     hostA1        NO     0                 0             hostA1-ib0
      1     MEMBER  STOPPED  hostA2     hostA2        NO     0                 0             hostA2-ib0
      2     MEMBER  STOPPED  hostB1     hostB1        NO     0                 0             hostB1-ib0
      3     MEMBER  STOPPED  hostB2     hostB2        NO     0                 0             hostB2-ib0
      128   CF      STOPPED  hostA3     hostA3        NO     -                 0             hostA3-ib0
      129   CF      STOPPED  hostB3     hostB3        NO     -                 0             hostB3-ib0

      HOSTNAME  STATE   INSTANCE_STOPPED  ALERT
      --------  -----   ----------------  -----
      hostA1    ACTIVE  NO                NO
      hostA2    ACTIVE  NO                NO
      hostA3    ACTIVE  NO                NO
      hostB1    ACTIVE  NO                NO
      hostB2    ACTIVE  NO                NO
      hostB3    ACTIVE  NO                NO
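      The same check can be scripted by counting the hosts that report an ACTIVE state in the db2instance -list output (a sketch based on the host section shown above):

      # Expect the count to equal the number of computers in the cluster (6 here).
      db2instance -list | awk '$2 == "ACTIVE"' | wc -l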
  3. Add the replica disk from site B and the file system quorum disk from the tiebreaker site.

    Add the replica disk and the file system quorum disk to the existing sqllib_shared file system. Note that the affinity of each LUN to its local hosts is specified as part of the NSD definitions.

    1. Create a file /tmp/nsdfailuregroup2.txt that describes the replica disk(s) at site B, and a file /tmp/nsdfailuregroup3.txt that describes the tiebreaker disk on host T. In the following example, hdiskB1 at site B holds the data replica for the sqllib_shared file system, while hdiskC1 on host T acts as a quorum disk.
      root@hostA1:/> cat /tmp/nsdfailuregroup2.txt
      /dev/hdiskB1:hostB1,hostB2,hostB3::dataAndMetadata:2
      root@hostA1:/> /usr/lpp/mmfs/bin/mmcrnsd -F /tmp/nsdfailuregroup2.txt
      mmcrnsd: Processing disk hdiskB1
      mmcrnsd: Propagating the cluster configuration data to all
      affected nodes. This is an asynchronous process.
      
      root@T:/> cat /tmp/nsdfailuregroup3.txt
      /dev/hdiskC1:T::descOnly:3
      
      root@T:/> /usr/lpp/mmfs/bin/mmcrnsd -F /tmp/nsdfailuregroup3.txt
      mmcrnsd: Processing disk hdiskC1
      mmcrnsd: Propagating the cluster configuration data to all
      affected nodes. This is an asynchronous process.
    2. Verify that the NSDs have been created with the mmlsnsd command:
      root@hostA1:/> /usr/lpp/mmfs/bin/mmlsnsd -X
      Disk name    NSD volume ID      Device         Devtype  Node name                Remarks
      ---------------------------------------------------------------------------------------------
      gpfs1001nsd  091A336D4D674B1E   /dev/hdiskB1   hdisk    hostA1.torolab.ibm.com
      gpfs1001nsd  091A336D4D674B1E   /dev/hdiskB1   hdisk    hostA2.torolab.ibm.com
      gpfs1001nsd  091A336D4D674B1E   /dev/hdiskB1   hdisk    hostA3.torolab.ibm.com
      gpfs1001nsd  091A336D4D674B1E   /dev/hdiskB1   hdisk    hostB1.torolab.ibm.com   server node
      gpfs1001nsd  091A336D4D674B1E   /dev/hdiskB1   hdisk    hostB2.torolab.ibm.com   server node
      gpfs1001nsd  091A336D4D674B1E   /dev/hdiskB1   hdisk    hostB3.torolab.ibm.com   server node
      gpfs1002nsd  091A33434D674B57   /dev/hdiskC1   hdisk    T.torolab.ibm.com        server node
      gpfs1nsd     091A33584D65F2F6   /dev/hdiskA1   hdisk    hostA1.torolab.ibm.com   server node
      gpfs1nsd     091A33584D65F2F6   /dev/hdiskA1   hdisk    hostA2.torolab.ibm.com   server node
      gpfs1nsd     091A33584D65F2F6   /dev/hdiskA1   hdisk    hostA3.torolab.ibm.com   server node
      gpfs1nsd     091A33584D65F2F6   /dev/hdiskA1   hdisk    hostB1.torolab.ibm.com
      gpfs1nsd     091A33584D65F2F6   /dev/hdiskA1   hdisk    hostB2.torolab.ibm.com
      gpfs1nsd     091A33584D65F2F6   /dev/hdiskA1   hdisk    hostB3.torolab.ibm.com
    3. Add the disk at site B to a file system:
      root@hostA1:/> cat /tmp/nsdfailuregroup2.txt
      # /dev/hdiskB1:hostB1,hostB2,hostB3::dataAndMetadata:2
      gpfs1001nsd:::dataAndMetadata:2::
      
      root@hostA1:/> /usr/lpp/mmfs/bin/mmadddisk db2fs1 -F /tmp/nsdfailuregroup2.txt
      The following disks of db2fs1 will be formatted on node hostA1:
      gpfs1001nsd: size 34603008 KB
      Extending Allocation Map
      Checking Allocation Map for storage pool 'system'
      Completed adding disks to file system db2fs1.
      mmadddisk: Propagating the cluster configuration data to all
      affected nodes. This is an asynchronous process.
    4. Verify that the disk has been added to the file system with the correct failure group:
      root@hostA1:/> /usr/lpp/mmfs/bin/mmlsdisk db2fs1 -L
      
      disk         driver   sector failure holds    holds                            storage
      name         type       size   group metadata data  status        availability disk id pool    remarks
      ------------ -------- ------ ------- -------- ----- ------------- ------------ ------- ------- ---------
      gpfs1nsd     nsd         512       1 yes      yes   ready         up                 1 system  desc
      gpfs1001nsd  nsd         512       2 yes      yes   ready         up                 2 system  desc
      Number of quorum disks: 2
      Read quorum value: 2
      Write quorum value: 2
      Attention: Due to an earlier configuration change the file system
      is no longer properly replicated.
    5. Similarly, add the disk at the tiebreaker site to the file system:
      root@T:/> cat /tmp/nsdfailuregroup3.txt
      # /dev/hdiskC1:T::descOnly:3
      gpfs1002nsd:::descOnly:3::
      
      root@T:/> /usr/lpp/mmfs/bin/mmadddisk db2fs1 -F /tmp/nsdfailuregroup3.txt
      
      The following disks of db2fs1 will be formatted on node T:
      gpfs1002nsd: size 1048576 KB
      Extending Allocation Map
      Checking Allocation Map for storage pool 'system'
      Completed adding disks to file system db2fs1.
      mmadddisk: Propagating the cluster configuration data to all
      affected nodes. This is an asynchronous process.
    6. Verify that the disk has been added to the file system and to the correct failure group:
      root@T:/> /usr/lpp/mmfs/bin/mmlsdisk db2fs1 -L
      
      disk         driver   sector failure holds    holds                            storage
      name         type       size   group metadata data  status        availability disk id pool    remarks
      ------------ -------- ------ ------- -------- ----- ------------- ------------ ------- ------- ---------
      gpfs1nsd     nsd         512       1 yes      yes   ready         up                 1 system  desc
      gpfs1001nsd  nsd         512       2 yes      yes   ready         up                 2 system  desc
      gpfs1002nsd  nsd         512       3 no       no    ready         up                 3 system  desc
      Number of quorum disks: 3
      Read quorum value: 2
      Write quorum value: 2
      Attention: Due to an earlier configuration change the file system
      is no longer properly replicated.
  4. Rebalance the file system to replicate the data on the newly added disks.
    root@hostA1:/> /usr/lpp/mmfs/bin/mmrestripefs db2fs1 -R
    Verify that the message about the file system not being replicated is gone:
    root@hostA1:/> /usr/lpp/mmfs/bin/mmlsdisk db2fs1 -L
    disk         driver   sector failure holds    holds                            storage
    name         type       size   group metadata data  status        availability disk id pool    remarks
    ------------ -------- ------ ------- -------- ----- ------------- ------------ ------- ------- ---------
    gpfs1nsd     nsd         512       1 yes      yes   ready         up                 1 system  desc
    gpfs1001nsd  nsd         512       2 yes      yes   ready         up                 2 system  desc
    gpfs1002nsd  nsd         512       3 no       no    ready         up                 3 system  desc
    Number of quorum disks: 3
    Read quorum value: 2
    Write quorum value: 2
    At the end of this step, the following is set up:
    • A GPFS and RSCT cluster across sites A, B and C
    • A tiebreaker host T that is part of the RSCT domain and GPFS cluster but is not part of the DB2 instance.
    • A DB2 pureScale cluster spanning sites A and B, with the instance shared metadata sqllib_shared file system being a replicated GPFS file system across sites A and B.

    In the example above, the data in sqllib_shared is stored on both /dev/hdiskA1 and /dev/hdiskB1. They are in separate replicated failure groups, so any data stored on /dev/hdiskA1 is replicated on /dev/hdiskB1. The file descriptor quorum for sqllib_shared is handled through /dev/hdiskC1.
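    If you want to see how the replicated data is distributed across the two failure groups, the GPFS mmdf command reports per-disk capacity and usage for a file system. For example (shown as a sketch; it is not a required part of this procedure):

    /usr/lpp/mmfs/bin/mmdf db2fs1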

  5. Create NSDs for the disks to be used for the log file system.

    At this point, storage replication is configured for sqllib_shared, but it still needs to be configured for the database and the transaction logs. Next, create NSDs for the disks to be used for logfs, ensuring that they are assigned to the correct failure groups.

    1. Create a file /tmp/nsdForLogfs1.txt.
      root@hostA1:/> cat /tmp/nsdForLogfs1.txt
      /dev/hdiskA2:hostA1,hostA2,hostA3::dataAndMetadata:1
      /dev/hdiskB2:hostB1,hostB2,hostB3::dataAndMetadata:2
      /dev/hdiskC2:T::descOnly:3
      
      root@hostA1:/> /usr/lpp/mmfs/bin/mmcrnsd -F /tmp/nsdForLogfs1.txt
      mmcrnsd: Processing disk hdiskA2
      mmcrnsd: Processing disk hdiskB2
      mmcrnsd: Processing disk hdiskC2
      mmcrnsd: Propagating the cluster configuration data to all
      affected nodes. This is an asynchronous process.
    2. Verify that the NSDs have been created:
      root@hostA1:/> /usr/lpp/mmfs/bin/mmlsnsd -X
      
      gpfs1004nsd  091A33584D675EDA   /dev/hdiskA2   hdisk    hostA1.torolab.ibm.com   server node
      gpfs1004nsd  091A33584D675EDA   /dev/hdiskA2   hdisk    hostA2.torolab.ibm.com   server node
      gpfs1004nsd  091A33584D675EDA   /dev/hdiskA2   hdisk    hostA3.torolab.ibm.com   server node
      gpfs1004nsd  091A33584D675EDA   /dev/hdiskA2   hdisk    hostB1.torolab.ibm.com
      gpfs1004nsd  091A33584D675EDA   /dev/hdiskA2   hdisk    hostB2.torolab.ibm.com
      gpfs1004nsd  091A33584D675EDA   /dev/hdiskA2   hdisk    hostB3.torolab.ibm.com
      gpfs1005nsd  091A336D4D675EDC   /dev/hdiskB2   hdisk    hostA1.torolab.ibm.com
      gpfs1005nsd  091A336D4D675EDC   /dev/hdiskB2   hdisk    hostA2.torolab.ibm.com
      gpfs1005nsd  091A336D4D675EDC   /dev/hdiskB2   hdisk    hostA3.torolab.ibm.com
      gpfs1005nsd  091A336D4D675EDC   /dev/hdiskB2   hdisk    hostB1.torolab.ibm.com   server node
      gpfs1005nsd  091A336D4D675EDC   /dev/hdiskB2   hdisk    hostB2.torolab.ibm.com   server node
      gpfs1005nsd  091A336D4D675EDC   /dev/hdiskB2   hdisk    hostB3.torolab.ibm.com   server node
      gpfs1006nsd  091A33434D675EE0   /dev/hdiskC2   hdisk    T.torolab.ibm.com        server node
  6. Create the replicated logfs file system.

    In step 5, GPFS rewrites /tmp/nsdForLogfs1.txt to refer to the NSD names instead of the hdisk names: it comments out the original entries and adds the entries that are required for creating the file system. After GPFS rewrites the file, it reads as follows:

    root@hostA1:/> cat /tmp/nsdForLogfs1.txt
    # /dev/hdiskA2:hostA1,hostA2,hostA3::dataAndMetadata:1
    gpfs1004nsd:::dataAndMetadata:1::
    # /dev/hdiskB2:hostB1,hostB2,hostB3::dataAndMetadata:2
    gpfs1005nsd:::dataAndMetadata:2::
    # /dev/hdiskC2:T::descOnly:3
    gpfs1006nsd:::descOnly:3::
    1. Create the logfs file system with two replicas, a 1 MB disk block size, a maximum of 255 nodes, and a mount point of /logfs:
      root@hostA1:/> /usr/lpp/mmfs/bin/mmcrfs logfs -F /tmp/nsdForLogfs1.txt -m 2 -M 2 -r 2 -R 2 -B 1M -n 255 -T /logfs
      
      The following disks of logfs will be formatted on node hostB2:
      gpfs1004nsd: size 438304768 KB
      gpfs1005nsd: size 34603008 KB
      gpfs1006nsd: size 57344 KB
      Formatting file system ...
      Disks up to size 6.7 TB can be added to storage pool 'system'.
      Creating Inode File
      Creating Allocation Maps
      Clearing Inode Allocation Map
      Clearing Block Allocation Map
      Formatting Allocation Map for storage pool 'system'
      Completed creation of file system /dev/logfs.
      mmcrfs: Propagating the cluster configuration data to all
      affected nodes. This is an asynchronous process.
    2. Verify that the file system has been created with the disks in the proper failure groups:
      root@hostA1:/> /usr/lpp/mmfs/bin/mmlsdisk logfs -L
      
      disk         driver   sector failure holds    holds                            storage
      name         type       size   group metadata data  status        availability disk id pool    remarks
      ------------ -------- ------ ------- -------- ----- ------------- ------------ ------- ------- ---------
      gpfs1004nsd  nsd         512       1 yes      yes   ready         up                 1 system  desc
      gpfs1005nsd  nsd         512       2 yes      yes   ready         up                 2 system  desc
      gpfs1006nsd  nsd         512       3 no       no    ready         up                 3 system  desc
      Number of quorum disks: 3
      Read quorum value: 2
      Write quorum value: 2
  7. Create NSDs for datafs, and create the datafs file system.

    Create the NSDs for the database container file systems.

    1. For this file system, use five disks at each of the two main sites, plus one disk from host T for file system quorum:
      root@hostA1:/> cat /tmp/nsdForDatafs.txt
      /dev/hdiskA3:hostA1,hostA2,hostA3::dataAndMetadata:1
      /dev/hdiskA4:hostA1,hostA2,hostA3::dataAndMetadata:1
      /dev/hdiskA5:hostA1,hostA2,hostA3::dataAndMetadata:1
      /dev/hdiskA6:hostA1,hostA2,hostA3::dataAndMetadata:1
      /dev/hdiskA7:hostA1,hostA2,hostA3::dataAndMetadata:1
      /dev/hdiskB3:hostB1,hostB2,hostB3::dataAndMetadata:2
      /dev/hdiskB4:hostB1,hostB2,hostB3::dataAndMetadata:2
      /dev/hdiskB5:hostB1,hostB2,hostB3::dataAndMetadata:2
      /dev/hdiskB6:hostB1,hostB2,hostB3::dataAndMetadata:2
      /dev/hdiskB7:hostB1,hostB2,hostB3::dataAndMetadata:2
      /dev/hdiskC3:T::descOnly:3
      
      root@hostA1:/> /usr/lpp/mmfs/bin/mmcrnsd -F /tmp/nsdForDatafs.txt
      mmcrnsd: Processing disk hdiskA3
      mmcrnsd: Processing disk hdiskA4
      mmcrnsd: Processing disk hdiskA5
      mmcrnsd: Processing disk hdiskA6
      mmcrnsd: Processing disk hdiskA7
      mmcrnsd: Processing disk hdiskB3
      mmcrnsd: Processing disk hdiskB4
      mmcrnsd: Processing disk hdiskB5
      mmcrnsd: Processing disk hdiskB6
      mmcrnsd: Processing disk hdiskB7
      mmcrnsd: Processing disk hdiskC3
      mmcrnsd: Propagating the cluster configuration data to all
      affected nodes. This is an asynchronous process.
    2. Create the datafs file system with two replicas, a 1 MB disk block size, a maximum of 255 nodes, and a mount point of /datafs:
      root@hostA1:/> /usr/lpp/mmfs/bin/mmcrfs datafs -F /tmp/nsdForDatafs.txt -m 2 -M 2 -r 2 -R 2 -B 1M -n 255 -T /datafs
      
      The following disks of datafs will be formatted on node hostA3:
      gpfs1016nsd: size 438304768 KB
      gpfs1017nsd: size 438304768 KB
      gpfs1018nsd: size 438304768 KB
      gpfs1019nsd: size 1462220800 KB
      gpfs1020nsd: size 1462220800 KB
      gpfs1021nsd: size 157286400 KB
      gpfs1022nsd: size 157286400 KB
      gpfs1023nsd: size 157286400 KB
      gpfs1024nsd: size 157286400 KB
      gpfs1025nsd: size 157286400 KB
      gpfs1026nsd: size 57344 KB
      Formatting file system ...
      Disks up to size 18 TB can be added to storage pool 'system'.
      Creating Inode File
      Creating Allocation Maps
      Clearing Inode Allocation Map
      Clearing Block Allocation Map
      Formatting Allocation Map for storage pool 'system'
      Completed creation of file system /dev/datafs.
      mmcrfs: Propagating the cluster configuration data to all
      affected nodes. This is an asynchronous process.
  8. Mount the log and data file systems.
    root@hostA1:/> /usr/lpp/mmfs/bin/mmlsmount logfs
    File system logfs is not mounted.
    
    root@hostA1:/> /home/db2inst1/sqllib/bin/db2cluster -cfs -mount -filesystem logfs
    File system 'logfs' was successfully mounted.
    
    root@hostA1:/> /usr/lpp/mmfs/bin/mmlsmount logfs
    File system logfs is mounted on 7 nodes.
    
    root@hostA1:/> /home/db2inst1/sqllib/bin/db2cluster -cfs -mount -filesystem datafs
    File system 'datafs' was successfully mounted.
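    The mount-and-verify sequence for the two new file systems can also be scripted. The following is a minimal sketch that reuses only the commands shown above:

    # Run as root on any host in the cluster.
    for fs in logfs datafs; do
        /home/db2inst1/sqllib/bin/db2cluster -cfs -mount -filesystem $fs
        /usr/lpp/mmfs/bin/mmlsmount $fs
    done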
  9. Complete the affinitization of reads.
    As root, complete the affinitization of reads to local hosts by issuing the following command:
    root@hostA1:/> mmchconfig readReplicaPolicy=local
    mmchconfig: Command successfully completed
    mmchconfig: Propagating the cluster configuration data to all
    affected nodes. This is an asynchronous process.
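    To confirm that the setting is now in effect across the cluster, you can list the attribute afterward (a sketch, assuming your GPFS level lets mmlsconfig take an attribute name as an argument):

    /usr/lpp/mmfs/bin/mmlsconfig readReplicaPolicy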

What to do next

Once GPFS replication is set up, you can create the database.