Installing a Tivoli SA MP fix pack on an IBM Smart Analytics System or on a Balanced Warehouse system

Technote (FAQ)


Question

How do I manually install an IBM Tivoli System Automation for Multiplatforms (Tivoli SA MP) fix pack on an IBM Smart Analytics System or on a Balanced Warehouse system?

Answer

If your system is installed with the IBM Smart Analytics System Control Console software, use an IBM Smart Analytics System fix pack available on Fix Central to update your software stack. If the Tivoli SA MP fix pack you need is not available in an IBM Smart Analytics System fix pack or your system does not have the control console software installed, use the instructions in this document to manually install a Tivoli SA MP fix pack on your IBM Smart Analytics System.
Before you begin
This document uses the following variables. Substitute the appropriate values for your environment.

Variable name: MGMT, ADMIN, ADMIN_ALL, DATA, STANDBY
Notes: These variables refer to the roles of the different servers. Replace each variable with the appropriate host names from your environment.
  • MGMT = management node used to manage all nodes in the cluster
  • ADMIN = administration node, NODE0000, or primary coordinator node
  • ADMIN_ALL = all coordinator nodes
  • DATA = all data nodes in the core warehouse
  • STANDBY = all standby nodes in the core warehouse

Variable name: BCU_share
Example name to use:
  • All AIX systems: /BCU_share
  • 5600 systems: /BCU_share
  • D5000 and D5100 systems: /csminstall

Variable name: new_path_to_fix_pack
Example name to use: TSAMP_3.1.0.9
Notes: Use a name that clearly labels the directory with the product, version, and build number (if you are applying a special build). This naming convention lets you easily identify the product and correct version to install if you need to install the product again in the future.

Note: You can use any utility you prefer to issue remote commands to the servers in the cluster. If you choose not to use a remote command utility, you can log on to each server and run the commands locally.
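
For example, a minimal sketch of a remote command loop, assuming passwordless ssh for root is configured between the management node and the other nodes, and that the host names are listed one per line in a file (the file name /tmp/nodes.txt is hypothetical):

    # Run a command on every node listed in /tmp/nodes.txt (hypothetical file).
    while read node; do
        echo "=== ${node} ==="
        ssh "${node}" "df /BCU_share"    # substitute the command for the current step
    done < /tmp/nodes.txt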

Overview
The following is an overview of the steps required to complete the upgrade and migration of RSCT and Tivoli SA MP manually on an IBM Smart Analytics System.
    1. Mount the installation media directory from the management node
    2. Download the necessary files
    3. Take a backup
    4. Stop the resources
    5. Disable critical resource protection method and enable manual mode
    6. Back up the resource model
    7. Stop the peer domain
    8. Remove RSCT efixes
    9. Install the Tivoli SA MP fix pack
    10. Start the domain
    11. Complete the migrations
    12. Back up the resource model
    13. Enable the critical resource protection and high availability
    14. Start the resources

Procedure
1. Mount the installation media directory from the management node

    1.1. If the installation media directory is not already mounted from the management node, mount it using the command that is appropriate for your system.
      • For IBM Smart Analytics System 7600 or 7700, Balanced Warehouse E7100, or IBM Smart Analytics System 5600 environments, issue the following command as root on ADMIN_ALL, DATA, and STANDBY nodes:

          mount /BCU_share
      • For Balanced Warehouse D5000 or D5100 environments, issue the following command as root on ADMIN_ALL, DATA, and STANDBY nodes:

          mount /csminstall

      Note: In newer configurations, the /BCU_share directory has replaced the /csminstall directory. If you mounted the /csminstall directory, substitute /csminstall for /BCU_share as you follow the instructions in this document. Alternatively, you can create a symbolic link.

    1.2. For D5000 and D5100 environments only, you can optionally create a symbolic link. Issue the following command as root on MGMT, ADMIN_ALL, DATA, and STANDBY nodes:

      ln -s /csminstall /BCU_share

    1.3. Create subdirectories in the /BCU_share directory on the management node to hold a copy of the downloaded files. You can name the directories anything that helps you remember what is stored there. These directories act as a repository that holds the installation images in case they are needed in the future. In the remainder of this document, the new directory for the Tivoli SA MP fix pack is referenced as the <new_path_to_fix_pack> variable.
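
      For example, to create a repository directory for a Tivoli SA MP 3.1.0.9 fix pack, using the example name from the table above:

        # Create the fix pack repository directory on the management node.
        mkdir -p /BCU_share/TSAMP_3.1.0.9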

2. Download the necessary files

    2.1. Download the appropriate version of the installation media. If you are installing a generally available Tivoli SA MP fix pack, you can download it from Fix Central (see the links in the Related information section at the end of this document).
    2.2. Download the appropriate Tivoli SA MP license file. Obtain the license file through your Passport Advantage account by downloading the DB2 activation zip file for your DB2 server type.

    For example, the activation zip file for IBM DB2 Enterprise Server Edition V9.5 is called "DB2_ESE_Auth_User_Activation_V95.zip".

    The activation zip file contains the "samXX.lic" files in the subdirectory "../ese_u/db2/license/".
      • For v3.2.x.x, the license file is called "sam32.lic"
      • For v3.1.x.x, the license file is called "sam31.lic"

    2.3. Copy the files to the subdirectory you created in the /BCU_share directory in step 1.3: copy the installation media into the directory specified by the <new_path_to_fix_pack> variable. After you extract the media (step 2.3.1), copy the license file into the "license" subdirectory of the extracted fix pack (step 2.3.2).

      2.3.1. Extract the files you downloaded. Run the following commands on the MGMT node:
        • uncompress <filename> or gunzip <filename>
        • tar xvf <filename>.tar

        A subdirectory is created with the name "SAM<version>MP<OS>", where <OS> is either AIX or Linux, and <version> is one of the following Tivoli SA MP versions:
          • On AIX 6.1, for Tivoli SA MP v3.1 Fix Pack 8, the subdirectory is called "SAM3108MPAIX".

              Note: Tivoli SA MP v3.1 Fix Pack 8 is shown as 3.1.5.8 in the lslpp command output.
          • On Linux, for Tivoli SA MP v3.1 Fix Pack 8, the subdirectory is called "SAM3108MPLinux".

      2.3.2. Extract the samXX.lic file. For the sample "DB2_ESE_Auth_User_Activation_V95.zip" activation zip file for IBM DB2 Enterprise Server Edition V9.5, the samXX.lic files are contained in the subdirectory "../ese_u/db2/license/".
        • Copy the license file into the /<new_path_to_fix_pack>/SAM<version>MP<OS>/license directory. If you are using Tivoli SA MP v3.1.x.x, copy "sam31.lic" into the "license" subdirectory of the extracted v3.1.0.8 fix pack installation media:
              /<new_path_to_fix_pack>/SAM3108MPAIX/license
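
      As a sketch, assuming both the AIX v3.1.0.8 installation media and the V9.5 activation zip file were copied into /BCU_share/TSAMP_3.1.0.9, the extract-and-copy sequence might look like this:

        cd /BCU_share/TSAMP_3.1.0.9
        # Extract the activation zip file; it contains the ese_u directory tree.
        unzip DB2_ESE_Auth_User_Activation_V95.zip
        # Copy the v3.1 license into the license subdirectory of the extracted media.
        cp ese_u/db2/license/sam31.lic SAM3108MPAIX/license/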

3. Take a backup
    Verify that you have a good backup image of the operating system using your normal backup procedures.

4. Stop the resources
    4.1. As root on the ADMIN node, run
      hastopdb2

    4.2. Verify that the database partition resources are offline using the "hals" command or the "lssam" command. Note: It might take a few minutes before the resources are stopped.

    4.3. Unmount the shared /db2home file system.
      • On AIX, run this command only if the system is a 7600. It is not necessary to unmount this shared file system on a 7700 system.

          mmumount all -a
      • On Linux, run the following command:

          hastopnfs

    4.4. Verify that the database partition resources and the NFS file system resources are offline using the "hals" command or the "lssam" command, as in the sketch below. Note: It might take a few minutes to stop the resources. If necessary, use the command "fuser -k /db2home" to disconnect users that are accessing the shared file system.
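
      For example, the following sketch flags any lssam lines that still report an Online or Pending state; an empty result suggests that all resources are offline (the exact state strings can vary by version):

        # List any resources not yet offline; no output means all are offline.
        lssam | grep -E 'Online|Pending'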

5. Disable critical resource protection method and enable manual mode
    Configure the RSCT software so that it does not trigger an automated reboot during the upgrade.

    5.1. Change the critical resource protection method so that the RSCT software will not reboot a node if a communication issue affects the cluster. Issue the following command as root on the ADMIN node:
      chrsrc -c IBM.PeerNode CritRsrcProtMethod=5

    5.2. Verify that the critical resource protection method has changed:
      lsrsrc -c IBM.PeerNode CritRsrcProtMethod

    5.3. Change Tivoli SA MP to manual mode:
      samctrl -M T

    5.4. Verify that you are in manual mode:
      lssamctrl

6. Back up the resource model
    6.1. Archive any previously captured backup.
      If a /var/ct.backup directory exists from a previous backup, archive it for safekeeping if you have not already done so:
        tar -cvf - /var/ct.backup | gzip > ct.backup_<label>_<date.time>_$(hostname).tgz

        For example
          tar -cvf - /var/ct.backup | gzip > ct.backup_before_upgrade_2014.01.14_bcuadmin01.tgz

    6.2. Back up the resource model. As root on each ADMIN_ALL, DATA, and STANDBY node, issue the following command:
      /usr/sbin/rsct/bin/ctbackup

        Note: You must specify the full path to the command.

    6.3. Verify that a new directory was created on each node.
      ls -l /var | grep ct.backup

    6.4. Archive the new backup for safekeeping.
      As root on each ADMIN_ALL, DATA, and STANDBY node, issue the following command:
        tar -cvf - /var/ct.backup | gzip > ct.backup_<label>_<date.time>_$(hostname).tgz

7. Stop the peer domain
    7.1. Stop the domain using the following command:
      stoprpdomain bcudomain

        If the domain does not stop, you can force it offline by running the "stoprpdomain -f bcudomain" command.

    7.2. Verify that the domain is offline:
      lsrpdomain

    7.3. Wait until the operational state of the domain (OpState) displays a status of "Offline" in the lsrpdomain command output before you proceed to the next step.
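
      If you prefer to wait in a loop rather than rerun the command manually, a minimal sketch (assuming OpState is the second column of the lsrpdomain output, as in the sample output in step 10.2):

        # Poll every 10 seconds until the domain reports Offline.
        until lsrpdomain | awk 'NR==2 {print $2}' | grep -q Offline; do
            sleep 10
        done
        lsrpdomain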

8. Remove RSCT efixes
    Remove any RSCT efix that was previously applied.

    8.1. List filesets with efixes
      Get a listing of all locked filesets and the locking EFIX label. As root on all nodes, run
         /usr/sbin/emgr -P

    8.2. Remove filesets with efixes
      Remove the efixes identified in the previous step. As root on all nodes, run
        /usr/sbin/emgr -r -L <EFIX label>
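
      If several efixes are installed, the removal can be scripted. The following sketch assumes the efix label appears in the third column of the emgr -P output; verify the column position against your own output before running it:

        # Remove each unique efix label reported by emgr -P (label column assumed).
        /usr/sbin/emgr -P | awk 'NR>2 {print $3}' | sort -u | while read label; do
            [ -n "$label" ] && /usr/sbin/emgr -r -L "$label"
        done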

9. Install the Tivoli SA MP fix pack
    Installing the fix pack updates both the RSCT software and the Tivoli SA MP software. You must migrate each component to complete the installation.

    9.1. Log in as root on ADMIN_ALL, DATA, and STANDBY nodes.

    9.2. Set the CT_MANAGEMENT_SCOPE environment variable to the value "2" by running the following command:
      export CT_MANAGEMENT_SCOPE=2

    9.3. Install the fix pack on ADMIN_ALL, DATA, and STANDBY nodes by issuing the following command on each ADMIN_ALL, DATA, and STANDBY node:
      /<new_path_to_fix_pack>/SAM<version>MP<OS>/installSAM
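
      For example, with the directory names used earlier in this document (AIX v3.1.0.8 media stored under /BCU_share/TSAMP_3.1.0.9), the command is:

        /BCU_share/TSAMP_3.1.0.9/SAM3108MPAIX/installSAM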

10. Start the domain
    10.1. Start the domain by running the following command:
      startrpdomain bcudomain

    10.2. Issue the lsrpdomain command and the lsrpnode command to verify that all nodes that have restarted in the domain are online.
      • Sample output for the lsrpdomain command:

            Name      OpState RSCTActiveVersion MixedVersions TSPort GSPort
            bcudomain Online  2.4.7.1           Yes           12347  12348
      • Sample output for the lsrpnode command:

            Name   OpState RSCTVersion
            node01 Online  2.4.11.6
            node02 Online  2.4.11.6
            node03 Online  2.4.11.6

          The MixedVersions value should be "Yes". If the MixedVersions value is not "Yes", wait a few minutes and rerun the lsrpdomain command until it is.

    10.3. Ensure that Tivoli SA MP has completed resource validation
      As root on the ADMIN node, run
        lssrc -ls IBM.RecoveryRM | grep Config

          If the value of "In Config State" is true, then resource validation has completed. If the value is false, resource validation is still in progress; wait for it to complete.
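
          A minimal polling sketch, assuming the output line is labeled "In Config State" as described above:

            # Poll every 30 seconds until resource validation completes.
            until lssrc -ls IBM.RecoveryRM | grep "In Config State" | grep -qi true; do
                sleep 30
            done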

    10.4. Run the lssam command and verify that the following checkpoint conditions are satisfied for all resource groups in the lssam output:
      • No red is displayed
      • All "Pending" states are cleared
      • All "Failed offline" states are cleared
      • All nominal states are "Offline"
      • There are no "Sacrificed" states
      • There are no "Unknown" states

      If any resources in the lssam output are in a "Failed Offline", "Sacrificed", or "Pending Offline" state, verify that the associated hardware is available and functioning correctly, and then reset the resource.

      To reset all resources using the HA Management Toolkit, issue the following command:
          hareset

            This command will take all resources offline and then attempt to reset the resources. If you specify the command with the "nooffline" argument, it will not take the resources offline before attempting to reset them.

        To reset a resource using the resetrsrc command:
            resetrsrc -s 'Name = "ResourceName"' ResourceClass

              Use the following sample command as a model:
              resetrsrc -s "Name = 'db2mnt-db2path_bcuaix_NODE0004-rs'" IBM.Application  

            If the hardware is available, this command resets the "Failed Offline" state of the resource to "Offline".

11. Complete the migrations
    11.1. Log in as root on the ADMIN node.

    11.2. Set your environment by running the following command:
      export CT_MANAGEMENT_SCOPE=2

    11.3. Complete the migration for RSCT:
      runact -c IBM.PeerDomain CompleteMigration Options=0

    11.4. Verify that you receive the following success message before proceeding:
      "Resource Class Action Response for CompleteMigration"

    11.5. Verify the migration is complete and that the MixedVersions value is "No":
      lsrpdomain

    11.6. Complete the migration for Tivoli SA MP:
      samctrl -m

        At the prompt to continue, enter "Y".

    11.7. Verify that the Tivoli SA MP update completed successfully.
      Check that the Active Version Number (AVN) matches the Installed Version Number (IVN) for Tivoli SA MP by running the following command:
        lssrc -ls IBM.RecoveryRM | grep VN
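
        The update is complete when the two version numbers match. As a sketch of an automated comparison (assuming the output lines are labeled IVN and AVN, each followed by a colon and the version number):

          # Compare the Installed and Active Version Numbers; adjust the parsing
          # if your lssrc output format differs.
          ivn=$(lssrc -ls IBM.RecoveryRM | awk -F: '/IVN/ {gsub(/ /,"",$2); print $2}')
          avn=$(lssrc -ls IBM.RecoveryRM | awk -F: '/AVN/ {gsub(/ /,"",$2); print $2}')
          if [ "$ivn" = "$avn" ]; then
              echo "Migration complete: IVN and AVN are both $ivn"
          else
              echo "Versions differ: IVN=$ivn AVN=$avn"
          fi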

12. Back up the resource model

    12.1. Back up the resource model using the commands in step 6.

13. Enable the critical resource protection and high availability
    13.1. Change the critical resource protection method to the default value. Issue the following command as root on the ADMIN node:
      chrsrc -c IBM.PeerNode CritRsrcProtMethod=3

    13.2. To verify the change completed successfully:
      lsrsrc -c IBM.PeerNode CritRsrcProtMethod

    13.3. Change to automatic mode:
      samctrl -M F

    13.4. Verify that you are in automatic mode:
      lssamctrl

14. Start the resources
    14.1. Mount the shared /db2home file system (if needed)
      • On AIX, as root on the ADMIN node, issue the following command:

          mmmount all -a
      • On Linux, issue the following command as root on each node:

          hastartnfs

    14.2. On Linux, verify that the NFS resources are online by running either the hals command or the lssam command.

    14.3. Start the database partition resources. Issue the following command as root on the ADMIN node:
      hastartdb2

    14.4. Verify that the database partition resources are online. Note: It might take a few minutes for the resources to come online.
      Run either the hals command or the lssam command to verify the status of the resources.

Related information

Download Tivoli SA MP fix packs
Fix Central

Cross reference information
Segment: Information Management
Product: InfoSphere Balanced Warehouse
Component: Balanced Warehouse
Platform: AIX, Linux
Version: 9.5, 9.1, 9.7


Document information

More support for: IBM Smart Analytics System
Software version: 9.7
Operating system(s): AIX 6.1, Linux
Reference #: 1588112
Modified date: 2014-02-13
