Fix Readme
Abstract
Readme file for VIOS 2.2.1.3 Fix Pack 25 Service Pack 01 (Fix ID VIOS_2.2.1.3-FP25-SP01).
Content
Readme file for: VIOS 2.2.1.3 FixPack 25 SP01
Product/Component Release: 2.2.1.0
Update Name: VIOS 2.2.1.3 FixPack 25 SP01
Fix ID: VIOS_2.2.1.3-FP25-SP01
Publication Date: 14 Dec 2011
Last modified date: 14 Dec 2011
Contents
Installation information
Download location
Below is a list of components, platforms, and file names that apply to this Readme file.
Product/Component Name | Platform | Fix |
---|---|---|
Virtual I/O Server | VIOS 2.2.1.3 | VIOS_2.2.1.3-FP25-SP01 |
Known issues
VIOS Shared Storage Pool requirements and limitations
The following requirements and limitations apply to enhancements to virtual storage, including Shared Storage Pool. These requirements and limitations do not apply to other non-Shared Storage Pool (SSP) VIOS functions.
Requirements for Shared Storage Pool
- Platforms: POWER6 and POWER7 only (includes blades)
- System requirements per SSP node:
- Minimum CPU: 1, with 1 physical CPU of entitlement
- Minimum memory: 4GB
- Storage requirements per SSP cluster (minimum): 1 Fibre Channel-attached disk of at least 10 GB for the repository
- At least 1 Fibre Channel-attached disk of at least 10 GB for data
- All storage devices (repository and pool) should be allocated on hardware-RAID storage for redundancy.
Limitations for Shared Storage Pool
Software Installation
- VIOS Software updates must be done while all clients utilizing storage in the Shared Storage Pool are shut down.
- All VIOS nodes must be at version 2.2.1.3 or later.
- When installing updates on a VIOS at version 2.2.1.3 that participates in a Shared Storage Pool, the Shared Storage Pool services must be stopped. This means that client partitions lose access to the shared storage pool through that VIOS. The ioscli command clstartstop can be used to stop the cluster services before installation, and can be run again after the reboot to restart the cluster services on that VIOS.
- The Shared Storage Pool cluster name and the pool name must be less than 16 characters long.
- The viosbr -migrate option is supported in VIOS 2.2.1.3 if APAR IV11852m13 is applied.
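The 16-character limit on cluster and pool names can be pre-checked before creating the cluster. The helper below is an illustrative POSIX shell sketch, not part of the VIOS CLI; the names shown are hypothetical.

```shell
# Sketch: reject cluster/pool names that are not less than 16 characters,
# per the Shared Storage Pool naming limitation.
check_name_len() {
    name="$1"
    if [ "${#name}" -ge 16 ]; then
        echo "REJECT: '$name' is ${#name} characters (must be less than 16)"
        return 1
    fi
    echo "OK: '$name' (${#name} characters)"
    return 0
}

check_name_len "clusterA"
check_name_len "verylongclustername01" || echo "(choose a shorter name)"
```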
SSP Configuration
Feature | Min | Max |
---|---|---|
Number of VIOS Nodes in Cluster | 1 | 4 |
Number of Physical Disks in Pool | 1 | 256 |
Number of Virtual Disks (LUs) Mappings in Pool | 1 | 1024 |
Number of Client LPARs per VIOS node | 1 | 40 |
Capacity of Physical Disks in Pool | 5GB | 4TB |
Storage Capacity of Storage Pool | 10GB | 128TB |
Capacity of a Virtual Disk (LU) in Pool | 1GB | 4TB |
Number of Repository Disks | 1 | 1 |
Maximum number of physical volumes that can be added to or replaced from a pool at one time: 64
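Because at most 64 physical volumes can be added to (or replaced in) the pool at one time, a longer disk list must be processed in batches. The sketch below only splits and prints the batches; the disk names are hypothetical, and the actual add/replace command is not shown.

```shell
# Sketch: split a physical-volume list into batches of at most 64,
# per the per-operation limit stated above.
batch_disks() {
    batch=""
    count=0
    for pv in "$@"; do
        batch="$batch $pv"
        count=$((count + 1))
        if [ "$count" -eq 64 ]; then
            echo "add batch:$batch"
            batch=""
            count=0
        fi
    done
    if [ -n "$batch" ]; then
        echo "add batch:$batch"
    fi
}

# Example with 2 disks (one batch):
batch_disks hdisk10 hdisk11
```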
Network Configuration
- Uninterrupted network connectivity is required for operation; that is, the network interface used for the Shared Storage Pool configuration must be on a highly reliable network that is not congested.
- Changing the hostname/IP address for a system is not supported when configured in a Shared Storage Pool.
- IPv4 compliance only
- When a VIOS is configured for a Shared Storage Pool environment, VLAN tagging is not supported.
- A Shared Storage Pool configuration should configure the TCP/IP resolver routine for name resolution to resolve host names locally first, and then use the DNS. For step by step instructions, refer to the TCP/IP name resolution documentation in the AIX Information Center.
- When restoring VIOS LPAR configuration(s) from a viosbr backup, all network devices and configuration(s) must be restored before Shared Storage Pool configurations are restored.
- The system must be configured with the fully qualified domain name. As an example the hostname command should report the system hostname as "mydivision.mycompany.com".
- The hostname/IP address provided for Shared Storage Pool configuration must resolve to the fully qualified domain name.
- The forward and reverse lookup should resolve to the IP address/hostname that is used for Shared Storage Pool configuration.
- It is recommended that the VIOSs that are part of the Shared Storage Pool configuration keep their clocks synchronized.
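The fully-qualified-hostname requirement above can be sanity-checked from the shell. This is a minimal sketch that only checks the string form (it assumes "fully qualified" means the name includes a domain part, as in the "mydivision.mycompany.com" example); actual forward and reverse DNS resolution should still be verified with your name-resolution tools.

```shell
# Sketch: check whether a hostname string looks fully qualified
# (contains a domain part, e.g. mydivision.mycompany.com).
is_fqdn() {
    case "$1" in
        *.*) return 0 ;;
        *) return 1 ;;
    esac
}

h=$(hostname 2>/dev/null || echo "unknown-host")
if is_fqdn "$h"; then
    echo "hostname '$h' is fully qualified"
else
    echo "hostname '$h' is NOT fully qualified; SSP configuration may fail"
fi
```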
Storage Configuration
- Uninterrupted access to the repository disk is required for operation.
- Physical Disks in the SAN Storage subsystem assigned to the Shared Storage Pool cannot be resized.
- Virtual SCSI devices provisioned from the Shared Storage Pool may drive higher CPU utilization than the classic Virtual SCSI devices.
- Using SCSI reservations (SCSI Reserve/Release and SCSI-3 Reserve) for fencing physical disks in the Shared Storage pool is not supported.
- High availability SAN solutions should be utilized to mitigate outages.
- SANCOM is not supported in a Shared Storage Pool environment.
Shared Storage Pool and other VIOS capabilities
- Virtual SCSI disk is the peripheral device type supported by SSP at this time.
- VIOSs configured for SSP require that Shared Ethernet Adapters (SEAs) be set up in threaded mode (the default mode). SEA in interrupt mode is not supported with SSP.
- VIOSs configured for SSP can be used as a Paging Space Partition (PSP), but the storage for the PSP paging spaces must come from logical devices not within a Shared Storage Pool. Using a VIOS SSP logical unit (LU) as an Active Memory Sharing (AMS) paging space or as the suspend/resume file is not supported. Also, Suspend and Resume feature for client LPARs backed by VIOS SSP LUs is not supported.
- When creating Virtual SCSI Adapters for VIOS LPARs, the option "Any client partition can connect" is not supported.
- If the storage pool name or cluster name exceeds 16 characters, APAR IV11852m13 needs to be applied.
Installation information
Prior to installation
NOTE: The minimum disk space requirement has changed for Release 2.2.0 and later.
Ensure that your rootvg contains at least 30 GB before you attempt to upgrade to Update Release 2.2.1.3 (FP 25 SP01).
If you are planning to update a version of VIOS lower than 2.1, you must first migrate your VIOS to VIOS version 2.1.0 using the Migration DVD. After the VIOS is at version 2.1.0, the Update Release can be applied to bring the VIOS to level 2.2.1.1 (FP 25). After a VIOS is at the 2.2.1.1 level, it can then be upgraded to the VIOS Service Release 2.2.1.3 (FP 25 SP01).
After the VIOS migration from 1.X to 2.X is complete, you must set Processor Folding, as described under "Migration DVD" at the following location:
Virtual I/O Server support for Power Systems
While the above process is the most straightforward for users, you should note that with this Update Release version 2.2.1.1, a single boot alternative to this multiple step process is available to NIM users. NIM users can update by creating a single, merged lpp_source that combines the contents of the Migration DVD with the contents of this Update Release 2.2.1.1 and the Service Release 2.2.1.3.
A single, merged lpp_source is not supported for VIOS that use SDDPCM. However, if you use SDDPCM, you can still enable a single boot update by using the alternate method described at the following location:
SDD and SDDPCM migration procedures when migrating VIOS from version 1.x to version 2.x
Before installing the Update Release
How to check for a loaded media repository, and then unload it.
- To check for loaded images, run the following command:
$ lsvopt
The Media column lists any loaded media.
- To unload media images, run the following command on all VTDs that have loaded images:
$ unloadopt -vtd <file-backed_virtual_optical_device>
- To verify that all media are unloaded, run the following command again:
$ lsvopt
The command output should show No Media for all VTDs.
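The unload steps above can be scripted across all VTDs. The sketch below assumes the column layout of lsvopt output (VTD, Media, Size) and only prints the unloadopt commands rather than running them; the sample output shown is illustrative, since lsvopt itself exists only on a VIOS.

```shell
# Sketch: from captured lsvopt-style output, print an unloadopt command
# for every VTD whose Media column is not "No Media".
# Assumption: whitespace-separated columns VTD / Media / Size, header on line 1.
print_unload_cmds() {
    awk 'NR > 1 && NF >= 2 && $2 != "No" { print "unloadopt -vtd " $1 }'
}

# Example with hypothetical captured output:
sample="VTD             Media                   Size(mb)
vtopt0          mycd.iso                 640
vtopt1          No Media                 n/a"
printf '%s\n' "$sample" | print_unload_cmds
```

On a VIOS, the output of `lsvopt` would be piped into the helper instead of the sample text.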
Migrating Shared Storage Pool configuration
If your current VIOS is configured with Shared Storage Pool, you must follow these steps.
Migrating cluster configuration from the version supported in 2.2.1.1 to the current version VIOS 2.2.1.3 is not supported by this service pack. You must apply APAR IV11852m13 to this service pack for that functionality.
The cluster that is created and configured on VIOS Version 2.2.1.1 can be migrated to Version 2.2.1.3 or later with Interim Fix APAR IV11853m13 installed. This migration allows you to keep your shared storage pool devices. If this procedure is not followed and the cluster is running, the update will fail.
Note: If you choose to destroy the devices supported by the storage pool, you can delete the cluster and skip these steps. However, you will not be able to recover client data, except from existing backups, taken from client partitions.
Follow these steps:
- Close all devices that are mapped to the shared storage pool, which may entail shutting down clients.
- As user padmin, create a backup of the old-version cluster on a VIOS at Version 2.2.1.1.
$ viosbr -backup -file oldCluster -clustername clusterA
- Save the generated backup file, oldCluster.clusterA.tar.gz, on a different system.
- List devices and note the storage with the name caa_private0. This volume group is the repository disk.
- Run chkdev on this device and note the IDENTIFIER, so that it can be found after reboot. (The device name may change and the IDENTIFIER field can be used to verify that the name is preserved or to find the repository disk.)
Example:
$ chkdev -dev hdisk1
NAME: hdisk1
IDENTIFIER: 200B75Y4191009107210790003IBMfcp
PHYS2VIRT_CAPABLE: YES
VIRT2NPIV_CAPABLE: NA
VIRT2PHYS_CAPABLE: NA
- List devices by using the lscluster -d command to determine which devices are part of the Shared Storage Pool.
Example:
$ lscluster -d | more
Storage Interface Query

Cluster Name: cluster_hina
Cluster uuid: 442b3de4-10cb-11e1-9dd1-e41f13205e94
Number of nodes reporting = 2
Number of nodes expected = 2
Node hinav1.austin.ibm.com
Node uuid = 43e8fa92-10cb-11e1-9dd1-e41f13205e94
Number of disk discovered = 3
    hdisk9
      state : UP
      uDid : 200B75Y4191165907210790003IBMfcp
      uUid : 1fdda09f-e2da-cba6-ad4f-dc8e02ab860a
      type : CLUSDISK
    hdisk8
      state : UP
      uDid : 200B75Y4191164907210790003IBMfcp
      uUid : 4d51499f-1721-98f9-ec3c-5a669f7b2cec
      type : CLUSDISK
    hdisk1
      state : UP
      uDid :
      uUid : 8743c7fa-a85f-11d4-833c-fd67f8eba2d9
      type : REPDISK
The CLUSDISK devices are the shared storage pool devices.
- Run chkdev on the devices identified in the previous step and note the IDENTIFIER values. These devices hold your client data and should not be reallocated after the install.
- Update the VIOS system with Version 2.2.1.3 or later.
Note: The physical volumes used for the storage pool should remain the same and their contents should not be altered.
- Install the appropriate interim fix for migrating the Shared Storage Pool configuration, and then reboot the VIOS.
- After the installation of VIOS Version 2.2.1.3 and the interim fix is completed, convert the backup file created in Step 2 to the new format. This action generates a migrated backup file, in gz format.
Example: oldCluster_MIGRATED.clusterA.tar.gz
$ viosbr -migrate -file oldCluster.clusterA.tar.gz
- Clean the physical volume associated with the repository disk that has the matching IDENTIFIER from step 5. If the device name no longer exists or the device is in the Defined state, you can use the chkdev command to identify the correct physical volume.
Example:
$ chkdev -dev hdisk9
NAME: hdisk9
IDENTIFIER: 200B75Y4191009107210790003IBMfcp
PHYS2VIRT_CAPABLE: YES
VIRT2NPIV_CAPABLE: NA
VIRT2PHYS_CAPABLE: NA
Run the cleandisk command on the device found from Step 11, in this example hdisk9. Do not run this command on other devices.
$ cleandisk -r hdisk9
- Restore the cluster by using the migrated backup file from Step 10, specifying the physical volume from Step 11 (in this example, hdisk9).
$ viosbr -restore -file oldCluster_MIGRATED.clusterA.tar.gz -clustername clusterA -repopvs hdisk9
After a successful restore, the cluster and all Shared Storage Pool mappings are configured as before.
- To verify that the cluster restored successfully, list the nodes in the cluster.
$ cluster -listnode -clustername clusterA
- List the storage mappings on the VIOS.
$ lsmap -all
Installing
Use one of the following methods to install the latest VIOS Update Release.
Applying updates from a local hard disk
To apply the updates from a directory on your local hard disk, follow these steps.
The current level of the VIOS must be 2.2.1.1
- Log in to the VIOS as the user padmin.
- If you use one or more File Backed Optical Media Repositories, you need to unload media images before you apply the Update Release. For details, see "Before installing the Update Release" above.
- If you use Shared Storage Pools, then you must follow specific steps, as outlined under Migrate Shared Storage Pool.
- Create a directory on the Virtual I/O Server.
$ mkdir <directory_name>
- Using ftp, transfer the update file(s) to the directory you created.
- Commit previous updates by running the updateios command:
$ updateios -commit
- Apply the update by running the updateios command:
$ updateios -accept -install -dev <directory_name>
- To load all changes, reboot the VIOS as user padmin.
$ shutdown -restart
- After the VIOS has rebooted, accept the license.
$ license -accept
- Verify that the update was successful by checking the results of the updateios command and by running the ioslevel command, which should report that the ioslevel is now 2.2.1.3.
$ ioslevel
- If you need to restore your cluster, follow the steps in Migrate Shared Storage Pool, step 10.
Applying updates from a remotely mounted file system
If the remote file system is to be mounted read-only, follow these steps.
The current level of the VIOS must be 2.2.1.1
- Log in to the VIOS as the user padmin.
- If you use one or more File Backed Optical Media Repositories, you need to unload media images before you apply the Update Release. For details, see "Before installing the Update Release" above.
- If you use Shared Storage Pools, then you must follow specific steps, as outlined under Migrate Shared Storage Pool.
- Mount the remote directory onto the Virtual I/O Server.
$ mount remote_machine_name:directory /mnt
- Commit previous updates by running the updateios command:
$ updateios -commit
- Apply the update by running the updateios command:
$ updateios -accept -install -dev /mnt
- To load all changes, reboot the VIOS as user padmin.
$ shutdown -restart
- Verify that the update was successful by checking the results of the updateios command and by running the ioslevel command, which should report that the ioslevel is now 2.2.1.3.
$ ioslevel
- If you need to restore your cluster, follow the steps in Migrate Shared Storage Pool, step 10.
Applying updates from the CD/DVD drive
This fix pack can be burned onto a CD by using the ISO image file(s). After the CD has been created, follow these steps.
The current level of the VIOS must be 2.2.1.1
- Log in to the VIOS as the user padmin
- If you use one or more File Backed Optical Media Repositories, you need to unload media images before you apply the Update Release. For details, see "Before installing the Update Release" above.
- If you use Shared Storage Pools, then you must follow specific steps, as outlined under Migrate Shared Storage Pool.
- Place the CD-ROM into the drive assigned to VIOS
- Commit previous updates by running the updateios command:
$ updateios -commit
- Apply the update by running the following updateios command:
$ updateios -accept -install -dev /dev/cdX
where X is the device number 0-N assigned to the VIOS.
- To load all changes, reboot the VIOS as user padmin.
$ shutdown -restart
- Verify that the update was successful by checking the results of the updateios command and by running the ioslevel command, which should report that the ioslevel is now 2.2.1.3.
$ ioslevel
- If you need to restore your cluster, follow the steps in Migrate Shared Storage Pool, step 10.
Performing the necessary tasks after installation
How to check for an incomplete installation caused by a loaded media repository
After installing an Update Release, you can use this method to determine if you have encountered the problem of a loaded media library.
Check the Media Repository by running this command:
$ lsrep
If the command reports: "Unable to retrieve repository data due to incomplete repository structure," then you have likely encountered this problem during the installation. The media images have not been lost and are still present in the file system of the virtual media library. Running the lsvopt command should show the media images.
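The check above can be scripted by matching the quoted error text in captured command output. This is a minimal sketch; the lsrep call itself works only on a VIOS, so it is shown commented out.

```shell
# Sketch: detect the incomplete-repository condition from captured lsrep output
# by matching the error message quoted above.
repo_incomplete() {
    case "$1" in
        *"incomplete repository structure"*) return 0 ;;
        *) return 1 ;;
    esac
}

# Example usage on a VIOS:
#   out=$(lsrep 2>&1)
#   if repo_incomplete "$out"; then
#       echo "Recovery needed: unload media images and reinstall ios.cli.rte"
#   fi
```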
How to recover from an incomplete installation caused by a loaded media repository
To recover from this type of installation failure, unload any media repository images, and then reinstall the ios.cli.rte package. Follow these steps:
- Unload any media images:
$ unloadopt -vtd <file-backed_virtual_optical_device>
- If you have not yet restarted the VIOS, restart it now. You must restart before you can run the installp command in the next step.
$ shutdown -restart
- Reinstall the ios.cli.rte fileset by running the following commands.
To escape the restricted shell:
$ oem_setup_env
To install the failed fileset:
# installp -Or -agX ios.cli.rte
To return to the restricted shell:
# exit
- Verify that the Media Repository is operational by running this command:
$ lsrep
Additional information
NIM installation information
Using NIM to back up and install the VIOS is supported as follows.
- Always create the SPOT resource directly from the VIOS mksysb image. Do NOT update the SPOT from an LPP_SOURCE.
- Only the updateios command should be used to update the VIOS. For further assistance, refer to the NIM documentation.
- To use NIM, ensure that the NIM Master is at the appropriate level to support the VIOS image. Refer to the following table.
VIOS level is | NIM Master level must be equal to or higher than |
---|---|
Service Pack 01 for Update Release 2.2.1.3 | AIX 6100-07-02 |
PACKAGE: Update Release 2.2.1.3 (FP 25 SP01)
IOSLEVEL: 2.2.1.3
General package notes
Update Release 2.2.1.3 (FP 25 SP01) provides updates to Virtual I/O Server (VIOS) 2.2.1.1 installations. Applying this package will upgrade the VIOS to the latest level, V2.2.1.3.
Review the list of fixes included in Update Release 2.2.1.3 (FP 25 SP01).
To take full advantage of all the function available in the VIOS on IBM Systems based on POWER6 or POWER7 technology, it is necessary to be at the latest system firmware level. If a system firmware update is necessary, it is recommended that the firmware be updated before you upgrade the VIOS to V2.2.1.3
Microcode or system firmware downloads for Power Systems (Fix Central)
Update Release 2.2.1.3 (FP 25 SP01) updates your VIOS partition to ioslevel V2.2.1.3. To determine if Update Release 2.2.1.3 (FP 25 SP01) is already installed, run the following command from the VIOS command line:
$ ioslevel
If Update Release 2.2.1.3 (FP 25 SP01) is installed, the command output is V2.2.1.3.
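The level check above can be automated when updating several VIOS partitions. The sketch below compares a level string against the target; it assumes the ioslevel output matches the target string exactly (on a VIOS you would pass in `$(ioslevel)`).

```shell
# Sketch: decide from a reported ioslevel string whether
# Update Release 2.2.1.3 (FP 25 SP01) still needs to be installed.
needs_update() {
    target="2.2.1.3"
    if [ "$1" = "$target" ]; then
        echo "Already at $target; no update needed"
        return 1
    fi
    echo "Level $1 detected; apply VIOS_2.2.1.3-FP25-SP01"
    return 0
}

needs_update "2.2.1.1"
```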
Installing the fix pack
If the above procedure shows that the VIOS Update Release 2.2.1.3 (FP 25 SP01) is not installed, follow the installation instructions in this Readme to install the Fix Pack.
Note: After you install the Update Release, you must reboot the VIOS.
Installing the latest version of Tivoli TSM
This release of the VIOS contains several enhancements in the area of POWER virtualization. The following list provides the features of each element by product area.
Note: Version 6.1.0, the previous version of Tivoli TSM, is still shipped and installed from the VIOS installation DVD.
Tivoli TSM version 6.2.2
The Tivoli TSM filesets are now being shipped on the VIOS Expansion Pack, with the required GSKit8 libraries.
The following are sample installation instructions for the new Tivoli TSM filesets:
- Insert the VIOS Expansion DVD into the DVD drive that is assigned to the VIOS partition.
- List the contents of the VIOS Expansion DVD:
$ updateios -list -dev /dev/cd0
Fileset Name
GSKit8.gskcrypt32.ppc.rte 8.0.14.7
GSKit8.gskcrypt64.ppc.rte 8.0.14.7
GSKit8.gskssl32.ppc.rte 8.0.14.7
GSKit8.gskssl64.ppc.rte 8.0.14.7
..
tivoli.tsm.client.api.32bit 6.2.2.0
tivoli.tsm.client.api.64bit 6.2.2.0
..
- Install the Tivoli TSM filesets:
$ updateios -fs tivoli.tsm.client.api.32bit -dev /dev/cd0
NOTE: Any prerequisite filesets are pulled in from the Expansion DVD. For TSM, this includes GSKit8.gskcrypt
- If needed, install additional TSM filesets:
$ updateios -fs tivoli.tsm.client.ba.32bit -dev /dev/cd0
- Verify that TSM is installed by listing the installed software.
$ lssw
Sample output:
..
tivoli.tsm.client.api.32bit 6.2.2.0 C F TSM Client - Application Programming Interface
List of fixes
This update release contains fixes for the following:
- Issues with Cluster Aware AIX software
- Issues with lscluster command
- Potential TCE leak problem
- Issues with storage framework
- Issues with Cluster services showing disks as down after path recovery
- Issues with NPIV client adapter reconnect after VIOS dump
- Issues with VSCSI client driver
- Issues with LU create and map functions
- Issues with failed node restore
- Issues with restoration of mappings using viosbr command
- Concurrency issues related to database
- Issue with mkvdev validation for VTD names
- Issue with padmin user unable to read xntpd log files
- Issues with viosbr command
- Issues with viosecure
- Authorizations for oem_setup_env
- Scalability and link issues with SEA driver
- Issues with SEA thread queue overflow
- Issues with updateios command
- Potential issue when removing SEA
For other fixes included in this Fix Pack, refer to the Cumulative fix history for VIOS Version 2.1.
Document change history
Date | Description of change |
Document Information
Modified date:
19 February 2022
UID
isg400000876