IBM PowerVM Virtual I/O Server
Contents
This Readme contains installation and other information about VIOS Update Release 2.2.2.1
Installation information
Pre-installation information and instructions
NOTE: The minimum disk space requirement has changed for Release 2.2.0 and later.
Ensure that your rootvg contains at least 30 GB before you attempt to upgrade to Update Release 2.2.2.1.
Example:
Run "lspv -size hdisk0", and then ensure that the output is 30000 (MB) or greater.
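As an illustration, the size check can be scripted. This is a minimal sketch; the value below stands in for the number that `lspv -size hdisk0` would print on a real VIOS:

```shell
#!/bin/sh
# Sketch: verify the rootvg disk is at least 30 GB (30000 MB) before upgrading.
# On a real VIOS the value would come from:  size_mb=$(lspv -size hdisk0)
check_rootvg_size() {
    size_mb=$1   # size in MB, as reported by 'lspv -size'
    if [ "$size_mb" -ge 30000 ]; then
        echo "OK: ${size_mb} MB meets the 30 GB requirement"
    else
        echo "TOO SMALL: ${size_mb} MB; at least 30000 MB is required"
    fi
}

check_rootvg_size 35840   # illustrative value
```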
If you are planning to update a version of VIOS lower than 2.1, you must first migrate your VIOS to VIOS version 2.1.0 using the Migration DVD. After the VIOS is at version 2.1.0, the Update Release can be applied to bring the VIOS to the latest level, 2.2.2.1.
After the VIOS migration from 1.X to 2.X is complete, you must set Processor Folding as described under "Migration DVD" at the following location:
Virtual I/O Server support for Power Systems
While the above process is the most straightforward for users, you should note that with this Update Release version 2.2.2.1, a single boot alternative to this multiple step process is available to NIM users. NIM users can update by creating a single, merged lpp_source that combines the contents of the Migration DVD with the contents of this Update Release 2.2.2.1.
A single, merged lpp_source is not supported for a VIOS that uses SDDPCM. However, if you use SDDPCM, you can still enable a single boot update by using the alternate method described at the following location:
SDD and SDDPCM migration procedures when migrating VIOS from version 1.x to version 2.x
Checking for a loaded media repository
To check for a loaded media repository, and then unload it, follow these steps.
- To check for loaded images, run the following command:
$ lsvopt
The Media column lists any loaded media.
- To unload media images, run the following command on all VTDs that have loaded images:
$ unloadopt -vtd <file-backed_virtual_optical_device>
- To verify that all media are unloaded, run the following command again:
$ lsvopt
The command output should show No Media for all VTDs.
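The check-and-unload loop above can be sketched in shell. The sample `lsvopt` output and its column layout are assumptions for illustration; on a real VIOS the text would come from running `lsvopt` itself:

```shell
#!/bin/sh
# Sketch: find VTDs that still have media loaded and unload each one.
# The sample output below stands in for a real 'lsvopt' run (layout assumed).
lsvopt_output='VTD             Media                 Size(mb)
vtopt0          vios_update.iso       4482
vtopt1          No Media              n/a'

# Select VTD names whose Media column is not "No Media" (skip the header row).
loaded_vtds=$(printf '%s\n' "$lsvopt_output" | awk 'NR > 1 && $2 != "No" { print $1 }')

for vtd in $loaded_vtds; do
    # On a real VIOS this would be:  unloadopt -vtd "$vtd"
    echo "would unload: $vtd"
done
```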
Migrating Shared Storage Pool configuration
If your current VIOS, at version 2.2.1.4 or later, is configured with a Shared Storage Pool, the following information applies.
The rolling updates enhancement allows you to apply Update Release 2.2.2.1 to the VIOS logical partitions in the cluster individually without causing an outage in the entire cluster. The updated logical partitions cannot use the new capabilities until all logical partitions in the cluster are updated and the cluster is upgraded to use the new capabilities.
To upgrade the VIOS logical partitions to use the new capabilities, ensure that the following conditions are met:
- All VIOS logical partitions must have VIOS Update Release version 2.2.1.4 or later installed. After the update, you can verify that the logical partitions have the new level of software installed by entering the cluster -status -verbose command from the VIOS command line. In the Node Upgrade Status field, if the status of the VIOS logical partition is displayed as UP_LEVEL, the software level in the logical partition is higher than the software level in the cluster. If the status is displayed as ON_LEVEL, the software level in the logical partition and the cluster is the same.
- All VIOS logical partitions must be running. If any VIOS logical partition in the cluster is not running, the cluster cannot be upgraded to use the new capabilities.
The VIOS SSP software monitors node status and automatically upgrades the cluster to make use of the new capabilities when all the nodes in the cluster have been updated to support those capabilities.
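As a sketch, the ON_LEVEL/UP_LEVEL check described above can be automated by scanning the `cluster -status -verbose` report. The sample report below and its exact layout are assumptions:

```shell
#!/bin/sh
# Sketch: count nodes whose Node Upgrade Status is not ON_LEVEL.
# The sample text stands in for:  cluster -status -verbose
status_output='Node Name:            viosA
Node Upgrade Status:  ON_LEVEL
Node Name:            viosB
Node Upgrade Status:  UP_LEVEL'

pending=$(printf '%s\n' "$status_output" |
    awk '/Node Upgrade Status:/ && $NF != "ON_LEVEL" { n++ } END { print n+0 }')

if [ "$pending" -eq 0 ]; then
    echo "all nodes ON_LEVEL; cluster has been upgraded"
else
    echo "$pending node(s) still UP_LEVEL; cluster not yet upgraded"
fi
```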
If your current VIOS, at version 2.2.1.1 or 2.2.1.3, is configured with a Shared Storage Pool, the following information applies.
A cluster that is created and configured on VIOS Version 2.2.1.1 or 2.2.1.3 must be migrated to Version 2.2.1.4 or 2.2.1.5 before you can use rolling updates. VIOS Version 2.2.1.3 can be migrated to VIOS Version 2.2.1.4. This migration allows you to keep your shared storage pool devices. If this procedure is not followed and the cluster is running, the update may fail unpredictably.
Note: If you choose to destroy the devices supported by the storage pool, you can delete the cluster and skip these steps. However, you will not be able to recover client data, except from existing backups taken from client partitions.
Follow these steps:
- Close all devices that are mapped to the shared storage pool, which may entail shutting down clients.
- On a VIOS with Version 2.2.1.1 or 2.2.1.3, create a backup of the old version cluster as User padmin.
$ viosbr -backup -file oldCluster -clustername clusterA
- Save the generated backup file, oldCluster.clusterA.tar.gz, on a different system.
- List devices and note the disk with the volume group name caavg_private. This volume group is the repository disk.
- Run chkdev on this device and note the IDENTIFIER, so that it can be found after reboot. (The device name may change and the IDENTIFIER field can be used to verify that the name is preserved or to find the repository disk.)
Example:
$ chkdev -dev hdisk1
NAME: hdisk1
IDENTIFIER: 200B75Y4191009107210790003IBMfcp
PHYS2VIRT_CAPABLE: YES
VIRT2NPIV_CAPABLE: NA
VIRT2PHYS_CAPABLE: NA
- List devices by using the lscluster -d command to determine the devices that are part of the Shared Storage Pool.
Example:
$ lscluster -d | more
Storage Interface Query

Cluster Name: cluster_hina
Cluster uuid: 442b3de4-10cb-11e1-9dd1-e41f13205e94
Number of nodes reporting = 2
Number of nodes expected = 2
Node hinav1.austin.ibm.com
Node uuid = 43e8fa92-10cb-11e1-9dd1-e41f13205e94
Number of disk discovered = 3
        hdisk9
          state : UP
          uDid  : 200B75Y4191165907210790003IBMfcp
          uUid  : 1fdda09f-e2da-cba6-ad4f-dc8e02ab860a
          type  : CLUSDISK
        hdisk8
          state : UP
          uDid  : 200B75Y4191164907210790003IBMfcp
          uUid  : 4d51499f-1721-98f9-ec3c-5a669f7b2cec
          type  : CLUSDISK
        hdisk1
          state : UP
          uDid  :
          uUid  : 8743c7fa-a85f-11d4-833c-fd67f8eba2d9
          type  : REPDISK
- Run chkdev on the devices identified in the previous step and note the IDENTIFIER. These devices have your client data and should not be reallocated after the install.
- Update the VIOS system with Version 2.2.1.4 or later.
Note: The physical volumes used for the storage pool should remain the same and their contents should not be altered.
- After the installation of VIOS Version 2.2.1.4, or later, is completed, convert the backup file created in Step 2 to the new format. This action generates a migrated backup file, in gz format.
Example: oldCluster_MIGRATED.clusterA.tar.gz
$ viosbr -migrate -file oldCluster.clusterA.tar.gz
- Clean the physical volume associated with the repository disk that has the matching IDENTIFIER from Step 5. If the device name no longer exists or is in the defined state, you can use the chkdev command to identify the correct physical volume.
Example:
$ chkdev -dev hdisk9
NAME: hdisk9
IDENTIFIER: 200B75Y4191009107210790003IBMfcp
PHYS2VIRT_CAPABLE: YES
VIRT2NPIV_CAPABLE: NA
VIRT2PHYS_CAPABLE: NA
Run the cleandisk command on the device found in Step 11, in this example hdisk9. Do not run this command on other devices.
$ cleandisk -r hdisk9
- Restore the network devices by using the migrated backup file from Step 9, specifying the physical volume from Step 10, in this example, hdisk9.
$ viosbr -restore -file oldCluster_MIGRATED.clusterA.tar.gz -clustername clusterA -repopvs hdisk9 -type net
- Restore the cluster by using the migrated backup file from Step 9, specifying the physical volume from Step 10, in this example, hdisk9.
$ viosbr -restore -file oldCluster_MIGRATED.clusterA.tar.gz -clustername clusterA -repopvs hdisk9
After a successful restore, the cluster and all Shared Storage Pool mappings are configured as before.
- To verify that the cluster restored successfully, list the nodes in the cluster.
$ cluster -listnode -clustername clusterA
- List the storage mappings on the VIOS.
$ lsmap -all
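Because device names can change across the reinstall, the steps above rely on the chkdev IDENTIFIER field. A minimal sketch of extracting it, using the chkdev output shown earlier in this readme as sample text:

```shell
#!/bin/sh
# Sketch: pull the IDENTIFIER out of 'chkdev -dev hdiskN' output so the
# repository and pool disks can be re-identified after the reinstall.
# The sample text copies the chkdev example from this readme.
chkdev_output='NAME: hdisk1
IDENTIFIER: 200B75Y4191009107210790003IBMfcp
PHYS2VIRT_CAPABLE: YES
VIRT2NPIV_CAPABLE: NA
VIRT2PHYS_CAPABLE: NA'

identifier=$(printf '%s\n' "$chkdev_output" | awk -F': *' '/^IDENTIFIER:/ { print $2 }')
echo "disk IDENTIFIER: $identifier"
```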
Installing the Update Release
There is now a method to verify the VIOS update files before installation. This process requires that the padmin user has access to openssl, which can be accomplished by creating a link. To verify the VIOS update files, follow these steps:
$ oem_setup_env
Create a link to openssl
# ln -s /usr/bin/openssl /usr/ios/utils/openssl
Verify the link to openssl was created
# ls -al /usr/ios/utils
# exit
Use one of the following methods to install the latest VIOS Service Release. As with all maintenance, you should create a VIOS backup before making changes.
If you are running a Shared Storage Pool configuration, you must follow the steps in "Migrating Shared Storage Pool configuration" above.
While running 'updateios' in the following steps, you may see accessauth messages, but these messages can safely be ignored.
Applying updates from a local hard disk
To apply the updates from a directory on your local hard disk, follow these steps.
The current level of the VIOS must be 2.1.0 or later.
- Log in to the VIOS as the user padmin.
- If you use one or more File Backed Optical Media Repositories, you need to unload media images before you apply the Update Release. For details, see "Checking for a loaded media repository" above.
- If you use Shared Storage Pools, then Shared Storage Pool Services must be stopped.
# clstartstop -stop -n <cluster_name> -m <hostname>
- Create a directory on the Virtual I/O Server.
$ mkdir <directory_name>
- Using ftp, transfer the update file(s) to the directory you created.
- Commit previous updates by running the updateios command
$ updateios -commit
- Verify the update files that were copied. This step can be performed only if the link to openssl was created.
$ chmod 777 <directory_path>/ck_sum.bff
$ cp <directory_path>/ck_sum.bff /home/padmin
$ ck_sum.bff <directory_path>
If there are missing updates or incomplete downloads, an error message is displayed.
- Apply the update by running the updateios command.
$ updateios -accept -install -dev <directory_name>
- Run the following command to set authorization for padmin.
$ swrole - PAdmin
- To load all changes, reboot the VIOS as user padmin.
$ shutdown -restart
- If cluster services were stopped in step 3, restart cluster services.
$ clstartstop -start -n <cluster_name> -m <hostname>
- Verify that the update was successful by checking the results of the updateios command and by running the ioslevel command, which should indicate that the ioslevel is now 2.2.2.1.
$ ioslevel
Applying updates from a remotely mounted file system
If the remote file system is to be mounted read-only, follow these steps.
The current level of the VIOS must be 2.1.0 or later.
- Log in to the VIOS as the user padmin.
- If you use one or more File Backed Optical Media Repositories, you need to unload media images before you apply the Update Release. For details, see "Checking for a loaded media repository" above.
- If you use Shared Storage Pools, then Shared Storage Pool Services must be stopped.
# clstartstop -stop -n <cluster_name> -m <hostname>
- Mount the remote directory onto the Virtual I/O Server.
$ mount remote_machine_name:directory /mnt
- Commit previous updates by running the updateios command.
$ updateios -commit
- Verify the update files that were copied. This step can be performed only if the link to openssl was created.
$ chmod 777 <directory_path>/ck_sum.bff
$ cp <directory_path>/ck_sum.bff /home/padmin
$ ck_sum.bff <directory_path>
If there are missing updates or incomplete downloads, an error message is displayed.
- Apply the update by running the updateios command.
$ updateios -accept -install -dev /mnt
- Run the following command to set authorization for padmin.
$ swrole - PAdmin
- To load all changes, reboot the VIOS as user padmin.
$ shutdown -restart
- If cluster services were stopped in step 3, restart cluster services.
$ clstartstop -start -n <cluster_name> -m <hostname>
- Verify that the update was successful by checking the results of the updateios command and by running the ioslevel command, which should indicate that the ioslevel is now 2.2.2.1.
$ ioslevel
Applying updates from the CD/DVD drive
This Update Release can be burned onto a CD by using the ISO image file(s). After the CD has been created, follow these steps.
The current level of the VIOS must be 2.1.0 or later.
- Log in to the VIOS as the user padmin
- If you use one or more File Backed Optical Media Repositories, you need to unload media images before you apply the Update Release. For details, see "Checking for a loaded media repository" above.
- If you use Shared Storage Pools, then Shared Storage Pool Services must be stopped.
# clstartstop -stop -n <cluster_name> -m <hostname>
- Place the CD-ROM into the drive assigned to the VIOS.
- Commit previous updates by running the updateios command.
$ updateios -commit
- Verify the update files that were copied. This step can be performed only if the link to openssl was created.
$ chmod 777 <directory_path>/ck_sum.bff
$ cp <directory_path>/ck_sum.bff /home/padmin
$ ck_sum.bff <directory_path>
If there are missing updates or incomplete downloads, an error message is displayed.
- Apply the update by running the following updateios command.
$ updateios -accept -install -dev /dev/cdX
where X is the device number 0-N assigned to the VIOS.
- Run the following command to set authorization for padmin.
$ swrole - PAdmin
- To load all changes, reboot the VIOS as user padmin.
$ shutdown -restart
- If cluster services were stopped in step 3, restart cluster services.
$ clstartstop -start -n <cluster_name> -m <hostname>
- Verify that the update was successful by checking the results of the updateios command and by running the ioslevel command, which should indicate that the ioslevel is now 2.2.2.1.
Post-installation information and instructions
How to check for an incomplete installation caused by a loaded media repository
After installing an Update Release, you can use this method to determine if you have encountered the problem of a loaded media library.
Check the Media Repository by running this command:
$ lsrep
If the command reports: "Unable to retrieve repository data due to incomplete repository structure," then you have likely encountered this problem during the installation. The media images have not been lost and are still present in the file system of the virtual media library. Running the lsvopt command should show the media images.
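A minimal sketch of that check, assuming the error text shown above; on a real VIOS the variable would be filled from `lsrep` itself:

```shell
#!/bin/sh
# Sketch: classify 'lsrep' output. On a real VIOS:  lsrep_output=$(lsrep 2>&1)
check_media_repository() {
    case "$1" in
        *"incomplete repository structure"*)
            echo "repository is incomplete; follow the recovery steps" ;;
        *)
            echo "repository looks intact" ;;
    esac
}

check_media_repository 'Unable to retrieve repository data due to incomplete repository structure'
```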
How to recover from an incomplete installation caused by a loaded media repository
To recover from this type of installation failure, unload any media repository images, and then reinstall the ios.cli.rte package. Follow these steps:
- Unload any media images
$ unloadopt -vtd <file-backed_virtual_optical_device>
- If you have not yet restarted the VIOS, restart it now. You must restart before you can run the installp command in the next step.
$ shutdown -restart
- Reinstall the ios.cli.rte fileset by running the following commands.
To escape the restricted shell:
$ oem_setup_env
To install the failed fileset:
# installp -Or -agX ios.cli.rte
To return to the restricted shell:
# exit
- Verify that the Media Repository is operational by running this command:
$ lsrep
General package and other additional information
NIM installation information
Using NIM to back up and install the VIOS is supported as follows.
- Always create the SPOT resource directly from the VIOS mksysb image. Do NOT update the SPOT from an LPP_SOURCE.
- Only the updateios command should be used to update the VIOS. For further assistance, refer to the NIM documentation.
- To use NIM, ensure that the NIM Master is at the appropriate level to support the VIOS image. Refer to the following table.
| VIOS level | NIM Master level must be equal to or higher than |
|---|---|
| Update Release 2.2.2.1 | AIX 6100-08 |
PACKAGE: Update Release 2.2.2.1 (FP 25 SP02)
IOSLEVEL: 2.2.2.1
General package notes
Review the list of fixes included in Update Release 2.2.2.1 (FP 25 SP02).
To take full advantage of all the function available in the VIOS on IBM Systems based on POWER6 or POWER7 technology, it is necessary to be at the latest system firmware level. If a system firmware update is necessary, it is recommended that the firmware be updated before you upgrade the VIOS to V2.2.2.1.
Microcode or system firmware downloads for Power Systems (Fix Central)
Update Release 2.2.2.1 includes the IVM code, but it will not be enabled on HMC-managed systems. Update Release 2.2.2.1, like all VIOS Update Releases, can be applied to either HMC-managed or IVM-managed VIOS.
Update Release 2.2.2.1 updates your VIOS partition to ioslevel V2.2.2.1. To determine if Update Release 2.2.2.1 is already installed, run the following command from the VIOS command line:
$ ioslevel
If Update Release 2.2.2.1 is installed, the command output is V2.2.2.1.
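As a sketch, the level check can be scripted. The comparison string below assumes `ioslevel` prints the bare level; adjust it to whatever your system actually prints:

```shell
#!/bin/sh
# Sketch: compare the reported ioslevel against the target release.
# On a real VIOS the value would come from:  current=$(ioslevel)
check_ioslevel() {
    current=$1
    target="2.2.2.1"
    if [ "$current" = "$target" ]; then
        echo "VIOS is at $target; update already installed"
    else
        echo "VIOS is at $current; update to $target needed"
    fi
}

check_ioslevel "2.2.2.1"
```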
Installing the latest version of Tivoli TSM
This release of the VIOS contains several enhancements in the area of POWER virtualization. The following list provides the features of each element by product area.
Note: Version 6.1.0, the previous version of Tivoli TSM, is still shipped and installed from the VIOS installation DVD.
Tivoli TSM version 6.2.2
The Tivoli TSM filesets are now being shipped on the VIOS Expansion Pack, with the required GSKit8 libraries.
The following are sample installation instructions for the new Tivoli TSM filesets:
- Insert the VIOS Expansion DVD into the DVD drive that is assigned to the VIOS partition.
- List Contents of the VIOS Expansion DVD
$ updateios -list -dev /dev/cd0
Fileset Name
GSKit8.gskcrypt32.ppc.rte 8.0.14.7
GSKit8.gskcrypt64.ppc.rte 8.0.14.7
GSKit8.gskssl32.ppc.rte 8.0.14.7
GSKit8.gskssl64.ppc.rte 8.0.14.7
..
tivoli.tsm.client.api.32bit 6.2.2.0
tivoli.tsm.client.api.64bit 6.2.2.0
..
- Install Tivoli TSM filesets
$ updateios -fs tivoli.tsm.client.api.32bit -dev /dev/cd0
NOTE: Any prerequisite filesets will be pulled in from the Expansion DVD. For TSM, this includes GSKit8.gskcrypt.
- If needed, install additional TSM filesets.
$ updateios -fs tivoli.tsm.client.ba.32bit -dev /dev/cd0
- Verify that TSM installed by listing the installed software.
$ lssw
Sample output:
..
tivoli.tsm.client.api.32bit 6.2.2.0 C F TSM Client - Application Programming Interface
Known issues in this release
VIOS Shared Storage Pool requirements and limitations
The following requirements and limitations apply to virtual storage enhancements, including Shared Storage Pool. These requirements and limitations do not apply to other non-Shared Storage Pool (SSP) VIOS functions.
Requirements for Shared Storage Pool
- Platforms: POWER6 and POWER7 processor-based servers only (includes Blades)
- System requirements per SSP node:
- Minimum CPU: 1, with 1 physical CPU of entitlement
- Minimum memory: 4GB
- Storage requirements per SSP cluster (minimum): one Fibre Channel-attached disk of 10 GB for the repository
- At least one Fibre Channel-attached disk of 10 GB for data
- All storage devices (repository and pool) should be allocated on Hardware RAIDed storage for redundancy.
Limitations for Shared Storage Pool
Software Installation
- All VIOS nodes must be at version 2.2.1.3 or later.
- When you install updates to a VIOS at version 2.2.2.1 participating in a Shared Storage Pool, the Shared Storage Pool Services must be stopped.
SSP Configuration
| Feature | Min | Max |
|---|---|---|
| Number of VIOS Nodes in Cluster | 1 | 16 |
| Number of Physical Disks in Pool | 1 | 1024 |
| Number of Virtual Disk (LU) Mappings in Pool | 1 | 8192 |
| Number of Client LPARs per VIOS node | 1 | 200 |
| Capacity of Physical Disks in Pool | 5 GB | 16 TB |
| Storage Capacity of Storage Pool | 10 GB | 512 TB |
| Capacity of a Virtual Disk (LU) in Pool | 1 GB | 4 TB |
| Number of Repository Disks | 1 | 1 |
- Maximum number of physical volumes that can be added to or replaced from a pool at one time: 64
- The Shared Storage Pool cluster name must be less than 63 characters long.
- The Shared Storage Pool pool name must be less than 127 characters long.
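A minimal sketch of validating names against the two length limits above before creating a cluster (the function name is illustrative):

```shell
#!/bin/sh
# Sketch: enforce the SSP naming limits (cluster name < 63 characters,
# pool name < 127 characters) before running the cluster-create commands.
valid_ssp_names() {
    cluster_name=$1
    pool_name=$2
    if [ ${#cluster_name} -ge 63 ]; then
        echo "cluster name too long (${#cluster_name} chars; must be < 63)"
        return 1
    fi
    if [ ${#pool_name} -ge 127 ]; then
        echo "pool name too long (${#pool_name} chars; must be < 127)"
        return 1
    fi
    echo "names OK"
}

valid_ssp_names clusterA poolA
```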
Network Configuration
- Uninterrupted network connectivity is required for operation. That is, the network interface used for Shared Storage Pool configuration must be on a highly reliable network that is not congested.
- A Shared Storage Pool configuration can use IPv4 or IPv6, but not a combination of both.
- A Shared Storage Pool configuration should configure the TCP/IP resolver routine for name resolution to resolve host names locally first, and then use the DNS. For step by step instructions, refer to the TCP/IP name resolution documentation in the AIX Information Center.
- The forward and reverse lookup should resolve to the IP address/hostname that is used for Shared Storage Pool configuration.
- It is recommended that the VIOSs that are part of the Shared Storage Pool configuration keep their clocks synchronized.
Storage Configuration
- Physical Disks in the SAN Storage subsystem assigned to the Shared Storage Pool cannot be resized.
- Virtual SCSI devices provisioned from the Shared Storage Pool may drive higher CPU utilization than the classic Virtual SCSI devices.
- Using SCSI reservations (SCSI Reserve/Release and SCSI-3 Reserve) for fencing physical disks in the Shared Storage pool is not supported.
- High availability SAN solutions should be utilized to mitigate outages.
- SANCOM is not supported in a Shared Storage Pool environment.
Shared Storage Pool and other VIOS capabilities
- Virtual SCSI disk is the peripheral device type supported by SSP at this time.
- VIOSs configured for SSP require that Shared Ethernet Adapter(s) (SEA) be set up for Threaded mode (the default mode). SEA in Interrupt Mode is not supported within SSP.
- VIOSs configured for SSP can be used as a Paging Space Partition (PSP), but the storage for the PSP paging spaces must come from logical devices not within a Shared Storage Pool. Using a VIOS SSP logical unit (LU) as an Active Memory Sharing (AMS) paging space or as the suspend/resume file is not supported. Also, Suspend/Resume and Remote Restart features for client LPARs backed by VIOS SSP LUs are not supported.
- When you create Virtual SCSI Adapters for VIOS LPARs, the option "Any client partition can connect" is not supported.
Fixes included in this release
This update release contains fixes for the following:
- Fixed issues in LHEA driver
- Fixed issues with MPIO drivers
- Fixed issues with Storage Framework driver
- Fixed issue with cluster services reporting disks as down even after paths have recovered
- Fixed problems with Ethernet device driver
- Fixed issues with Qlogic FCoE target mode
- Fixed issues with LPM validation using NPIV
- Fixed Emulex Target Mode problems
- Fixed issues with SEA
- Fixed problems in Trusted Logging
- Fixed issues with SSP LU create and map functions
- Fixed issues with viosbr command
- Fixed lsmap issue where all vhosts were not listed
- Fixed issue with installing filesets from DVD
- Fixed issue with Padmin user unable to read XNTD log files
- Fixed issues with ioscli snapshot command
- Fixed issue with viosecure
- Fixed RBAC issues with oem_setup_env
- Fixed problem with cluster -list not showing correct clustername
- Fixed issues with chsp command
- Fixed issues with LPM migration
Document Information
Modified date:
19 February 2022
UID
hpc1vios117f5701