IBM PowerVM Virtual I/O Server
This Readme contains installation and other information about VIOS Update Release 2.2.3.4.
Package information
PACKAGE: Update Release 2.2.3.4
IOSLEVEL: 2.2.3.4
VIOS level | NIM Master level must be equal to or higher than |
---|---|
Update Release 2.2.3.4 | AIX 6100-09-04 or AIX 7100-03-04 |
General package notes
Review the list of fixes included in Update Release 2.2.3.4.
To take full advantage of all the function available in the VIOS, it may be necessary to be at the latest system firmware level. If a system firmware update is necessary, it is recommended that the firmware be updated before you update the VIOS to Update Release 2.2.3.4.
Microcode or system firmware downloads for Power Systems
The VIOS update Release 2.2.3.4 includes the IVM code, but it will not be enabled on HMC-managed systems. Update Release 2.2.3.4, like all VIOS Update Releases, can be applied to either HMC-managed or IVM-managed VIOS.
Update Release 2.2.3.4 updates your VIOS partition to ioslevel 2.2.3.4. To determine if Update Release 2.2.3.4 is already installed, run the following command from the VIOS command line:
$ ioslevel
If Update Release 2.2.3.4 is installed, the command output is 2.2.3.4.
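The check above can be wrapped in a small sketch. Here `check_level` is a hypothetical helper, not a VIOS command; on a live VIOS you would feed it the output of `ioslevel`:

```shell
# Hypothetical helper: decide from the ioslevel output string whether
# the update is still needed.
check_level() {
  if [ "$1" = "2.2.3.4" ]; then
    echo "Update Release 2.2.3.4 is installed"
  else
    echo "current ioslevel is $1 - update required"
  fi
}

# On a live VIOS: check_level "$(ioslevel)"
check_level "2.2.3.4"
```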
Known Capabilities and Limitations
Limitations for PowerVM functionality
The following PowerVM functionality is excluded from the initial introduction of Power machine model types 8286-41A, 8286-42A, 8247-21L, and 8247-22L:
- Suspend/Resume or hibernation of an LPAR
- Live Partition Mobility when used with Active Memory Sharing configurations
- Live Partition Mobility for IBM i LPARs
The following PowerVM functionality is excluded from the initial introduction of Power machine model types 9119-MME (E870) and 9119-MHE (E880):
- Suspend/Resume or hibernation of an LPAR
Capabilities and limitations for Shared Storage Pool
The following requirements and limitations apply to Shared Storage Pool (SSP) features and any associated virtual storage enhancements.
Requirements for Shared Storage Pool
- Platforms: POWER6 and later (including Blades), IBM PureFlex Systems (Power Compute Nodes only)
- System requirements per SSP node:
- Minimum CPU: 1 CPU of guaranteed entitlement
- Minimum memory: 4GB
- Storage requirements per SSP cluster (minimum):
- One Fibre Channel-attached disk of at least 1 GB for the repository
- At least one Fibre Channel-attached disk of at least 10 GB for data
Limitations for Shared Storage Pool
Software Installation
- All VIOS nodes must be at version 2.2.1.3 or later.
- When installing VIOS Update Release 2.2.3.4 on a node that participates in a Shared Storage Pool, the Shared Storage Pool Services must be stopped on the node being updated.
Feature | Min | Max |
---|---|---|
Number of VIOS Nodes in Cluster | 1 | 16 |
Number of Physical Disks in Pool | 1 | 1024 |
Number of Virtual Disks (LUs) Mappings in Pool | 1 | 8192 |
Number of Client LPARs per VIOS node | 1 | 200 |
Capacity of Physical Disks in Pool | 10GB | 16TB |
Storage Capacity of Storage Pool | 10GB | 512TB |
Capacity of a Virtual Disk (LU) in Pool | 1GB | 4TB |
Number of Repository Disks | 1 | 1 |
Capacity of Repository Disk | 512MB | 1016GB |
- Maximum number of physical volumes that can be added to or replaced in a pool at one time: 64
- The Shared Storage Pool cluster name must be less than 63 characters long.
- The Shared Storage Pool pool name must be less than 127 characters long.
- The maximum supported LU size is 4TB. However, it is recommended to limit the size of individual LUs to 16 GB for optimal performance in cases where all of the following conditions are met:
- The server generates a random access pattern for the I/O device.
- There are more than 8 processes concurrently performing I/O.
- The performance of the application is dependent on the I/O subsystem throughput.
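The limits listed above lend themselves to a quick sanity check before provisioning. The helpers below are illustrative, not VIOS commands; the numbers come from the table and bullets in this section:

```shell
# Sketch: validate a proposed LU size and cluster name against the
# documented SSP limits (illustrative helpers, not VIOS commands).
check_lu_gb() {
  # LU capacity must be between 1 GB and 4 TB (4096 GB)
  [ "$1" -ge 1 ] && [ "$1" -le 4096 ]
}
check_cluster_name() {
  # Cluster name must be less than 63 characters long
  [ "${#1}" -lt 63 ]
}

if check_lu_gb 512 && check_cluster_name "sspcluster1"; then
  echo "within limits"
fi
```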
Network Configuration
- Uninterrupted network connectivity is required for operation; that is, the network interface used for the Shared Storage Pool configuration must be on a highly reliable network that is not congested.
- A Shared Storage Pool configuration can use IPv4 or IPv6, but not a combination of both.
- A Shared Storage Pool configuration should configure the TCP/IP resolver routine for name resolution to resolve host names locally first, and then use the DNS. For step by step instructions, refer to the TCP/IP name resolution documentation in the IBM Knowledge Center.
- The forward and reverse lookup should resolve to the IP address/hostname that is used for Shared Storage Pool configuration.
- It is recommended that the VIOSs that are part of the Shared Storage Pool configuration keep their clocks synchronized.
Storage Configuration
- Virtual SCSI devices provisioned from the Shared Storage Pool may drive higher CPU utilization than the classic Virtual SCSI devices.
- Using SCSI reservations (SCSI Reserve/Release and SCSI-3 Reserve) for fencing physical disks in the Shared Storage pool is not supported.
- SANCOM will not be supported in a Shared Storage Pool environment.
Shared Storage Pool capabilities and limitations
- On the client LPAR, a virtual SCSI disk is the only peripheral device type supported by SSP at this time.
- When creating Virtual SCSI Adapters for VIOS LPARs, the option "Any client partition can connect" is not supported.
- VIOSs configured for SSP require that Shared Ethernet Adapters (SEA) be set up in Threaded mode (the default mode). SEA in Interrupt mode is not supported with SSP.
- VIOSs configured for SSP can be used as a Paging Space Partition (PSP), but the storage for the PSP paging spaces must come from logical devices not within a Shared Storage Pool. Using a VIOS SSP logical unit (LU) as an Active Memory Sharing (AMS) paging space or as the suspend/resume file is not supported.
Installation information
Pre-installation information and instructions
Before you attempt to update to Update Release 2.2.3.4, ensure that rootvg contains at least 30GB of capacity and has at least 4GB of free space.
Run the lsvg rootvg command, and then ensure there is enough free space.
Example:
$ lsvg rootvg
VOLUME GROUP:       rootvg                   VG IDENTIFIER:      00f6004600004c000000014306a3db3d
VG STATE:           active                   PP SIZE:            64 megabyte(s)
VG PERMISSION:      read/write               TOTAL PPs:          511 (32704 megabytes)
MAX LVs:            256                      FREE PPs:           64 (4096 megabytes)
LVs:                14                       USED PPs:           447 (28608 megabytes)
OPEN LVs:           12                       QUORUM:             2 (Enabled)
TOTAL PVs:          1                        VG DESCRIPTORS:     2
STALE PVs:          0                        STALE PPs:          0
ACTIVE PVs:         1                        AUTO ON:            yes
MAX PPs per VG:     32512                    MAX PPs per PV:     1016
MAX PVs:            32                       LTG size (Dynamic): 256 kilobyte(s)
AUTO SYNC:          no                       HOT SPARE:          no
BB POLICY:          relocatable              PV RESTRICTION:     none
INFINITE RETRY:     no
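The free-space check can be scripted by extracting the megabyte figure from the FREE PPs field. This is a sketch working on a saved sample line; on a live VIOS you would capture the real `lsvg rootvg` output:

```shell
# Sketch: pull the free megabytes out of lsvg-style output and compare
# against the 4 GB (4096 MB) requirement. The sample line is illustrative.
free_mb() {
  echo "$1" | sed -n 's/.*FREE PPs:[^(]*(\([0-9]*\) megabytes).*/\1/p'
}

line="FREE PPs: 64 (4096 megabytes)"
mb=$(free_mb "$line")
if [ "$mb" -ge 4096 ]; then
  echo "rootvg has ${mb} MB free - OK to update"
else
  echo "need at least 4096 MB free, found ${mb} MB"
fi
```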
Upgrading from VIOS version lower than 2.1.0
If you are updating a VIOS at a version lower than 2.1.0, you must first migrate the VIOS to version 2.1.0 by using the Migration DVD. After the VIOS is at version 2.1.0, apply Fix Pack 2.2.3.1 to bring the VIOS to the 2.2.3.1 level. Update Release 2.2.3.4 can then be applied to bring the VIOS to the latest level.
Note that with this Update Release 2.2.3.4, a single boot alternative to this multiple step process is available to NIM users. NIM users can update by creating a single, merged lpp_source that combines the contents of the Migration DVD with the contents of this Update Release 2.2.3.4 along with the Fix Pack 2.2.3.1.
A single, merged lpp_source is not supported for VIOS that uses SDDPCM. However, if you use SDDPCM, you can still enable a single boot update by using the alternate method described at the following location:
SDD and SDDPCM migration procedures when migrating VIOS from version 1.x to version 2.x
After the VIOS migration from 1.x to 2.x is complete, you must disable Processor Folding. Instructions are detailed in the "Migration DVD" section at the link below:
Virtual I/O Server support for Power Systems
Upgrading from VIOS version 2.1.0 and above
VIOS Update Release 2.2.3.4 can be applied to a VIOS at any level from 2.2.3.0 through 2.2.3.3. When the single-step update procedure is used, the current VIOS level must be between 2.2.1.1 and 2.2.2.x.
NOTE: To update to Update Release 2.2.3.4 from a level between 2.2.1.1 and 2.2.3.1 in a single step, place the 2.2.3.1 and 2.2.3.4 updates in the same location and perform the update by using the updateios command.
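Staging both update levels in one directory can be sketched as follows. The directory names here are temporary stand-ins, not real update paths:

```shell
# Sketch: place the 2.2.3.1 and 2.2.3.4 images in a single staging
# directory so one updateios run can apply both. All paths below are
# illustrative stand-ins created with mktemp.
staging=$(mktemp -d)
src231=$(mktemp -d)
src234=$(mktemp -d)
touch "$src231/U2231.bff" "$src234/U2234.bff"   # stand-ins for real filesets

cp "$src231"/* "$src234"/* "$staging"/
ls "$staging" | sort
# On the VIOS: $ updateios -accept -install -dev <staging_directory>
```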
Before installing the VIOS Update Release 2.2.3.4
The update could fail if there is a loaded media repository.
Checking for a loaded media repository
To check for a loaded media repository, and then unload it, follow these steps.
- To check for loaded images, run the following command:
$ lsvopt
The Media column lists any loaded media.
- To unload media images, run the following command on each Virtual Target Device that has a loaded image.
$ unloadopt -vtd <file-backed_virtual_optical_device>
- To verify that all media are unloaded, run the following command again.
$ lsvopt
The command output should show No Media for all VTDs.
Migrate Shared Storage Pool Configuration
The Virtual I/O Server (VIOS) Version 2.2.2.1 or later supports rolling updates for SSP clusters. The VIOS can be updated to Update Release 2.2.3.4 by using rolling updates.
If your current VIOS is configured with Shared Storage Pools from 2.2.1.1 or 2.2.1.3, the following information applies:
A cluster that was created and configured on the earlier VIOS version 2.2.1.1 or 2.2.1.3 must be migrated to version 2.2.1.4 or 2.2.1.5 before rolling updates can be used. This allows the user to keep their shared storage pool devices. When the VIOS version is 2.2.1.4 or later but earlier than 2.2.3.1, download the 2.2.3.1 and 2.2.3.4 update images into the same directory, and then update the VIOS to Update Release 2.2.3.4 by using rolling updates.
If your current VIOS is configured with Shared Storage Pool from 2.2.1.4 or later, the following information applies:
The rolling updates enhancement allows the user to apply Update Release 2.2.3.4 to the VIOS logical partitions in the cluster individually without causing an outage in the entire cluster. The updated VIOS logical partitions cannot use the new SSP capabilities until all VIOS logical partitions in the cluster are updated.
To upgrade the VIOS logical partitions to use the new SSP capabilities, ensure that the following conditions are met:
- All VIOS logical partitions must have VIOS Update Release version 2.2.1.4 or later installed. After the update, you can verify that the logical partitions have the new level of software installed by typing the cluster -status -verbose command from the VIOS command line. In the Node Upgrade Status field, if the status of a VIOS logical partition is displayed as UP_LEVEL, the software level in the logical partition is higher than the software level in the cluster. If the status is displayed as ON_LEVEL, the software level in the logical partition and the cluster is the same.
- All VIOS logical partitions must be running. If any VIOS logical partition in the cluster is not running, the cluster cannot be upgraded to use the new SSP capabilities.
The VIOS SSP software monitors node status and will automatically upgrade the cluster to make use of the new capabilities. When all the nodes in the cluster have been updated, cluster -status -verbose reports ON_LEVEL.
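The ON_LEVEL check can be sketched as below. The `status_fields` function simulates the Node Upgrade Status lines; on a live VIOS you would grep the real `cluster -status -verbose` output:

```shell
# Sketch: report whether a rolling update is still in progress by counting
# UP_LEVEL nodes. status_fields is a stand-in for filtered cluster -status
# -verbose output; the sample lines are illustrative.
status_fields() {
  printf 'Node Upgrade Status: ON_LEVEL\n'
  printf 'Node Upgrade Status: UP_LEVEL\n'
}

pending=$(status_fields | grep -c 'UP_LEVEL')
if [ "$pending" -eq 0 ]; then
  echo "cluster fully updated"
else
  echo "$pending node(s) still UP_LEVEL - rolling update in progress"
fi
```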
Installing the Update Release
There is now a method to verify the VIOS update files before installation. This process requires that the 'padmin' user has access to openssl, which can be accomplished by creating a link.
To verify the VIOS update files, follow these steps:
$ oem_setup_env
Create a link to openssl
# ln -s /usr/bin/openssl /usr/ios/utils/openssl
Verify the link to openssl was created
# ls -alL /usr/bin/openssl /usr/ios/utils/openssl
Both files should display similar owner and size
# exit
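The `ls -alL` comparison above can be automated by confirming that the link resolves to a file of the same size as the original. The paths below are temporary stand-ins created for illustration, not the real openssl binary:

```shell
# Sketch: check that a symlink resolves to a file of the same size as the
# original (mirrors the ls -alL owner/size comparison above). The files
# here are mktemp stand-ins, not /usr/bin/openssl.
same_size() {
  [ "$(wc -c < "$1")" -eq "$(wc -c < "$2")" ]
}

orig=$(mktemp)
printf 'binary' > "$orig"
ln -s "$orig" "$orig.lnk"

if same_size "$orig" "$orig.lnk"; then
  echo "link OK"
else
  echo "link broken"
fi
```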
Use one of the following methods to install the latest VIOS Service Release. As with all maintenance, you should create a VIOS backup before making changes.
If you are running a Shared Storage Pool configuration, you must follow the steps in Migrate Shared Storage Pool Configuration.
Note: While running 'updateios' in the following steps, you might see accessauth messages; these messages can safely be ignored.
If your current level is between 2.2.1.1 and 2.2.2.1, you must place the 2.2.3.1 and 2.2.3.4 updates in the same location to apply the updates in one step. The one-step approach fixes an update problem with the builddate on the bos.alt_disk_install.boot_images fileset.
If your current level is 2.2.2.1, 2.2.2.2, 2.2.2.3, or 2.2.3.1, you must run the updateios command twice to fix the bos.alt_disk_install.boot_images fileset update problem.
Run the following command after "$ updateios -accept -install -dev <directory_name>" completes:
$ updateios -accept -dev <directory_name>
Applying updates from a local hard disk
WARNING: If the target node to be updated is part of a redundant VIOS pair, ensure that the VIOS partner node is fully operational before beginning to update the target node. NOTE that for VIOS nodes that are part of an SSP cluster, the partner node must be shown in 'cluster -status' output as having a cluster status of OK and a pool status of OK. If the target node is updated before its VIOS partner is fully operational, client LPARs may crash.
To apply the updates from a directory on your local hard disk, follow these steps.
The current level of the VIOS must be 2.2.2.1 or later if you use Shared Storage Pools.
- Log in to the VIOS as the user padmin.
- If you use one or more File Backed Optical Media Repositories, you need to unload media images before you apply the Update Release. See details here.
- If you use Shared Storage Pools, then Shared Storage Pool Services must be stopped.
$ clstartstop -stop -n <cluster_name> -m <hostname>
- Create a directory on the Virtual I/O Server.
$ mkdir <directory_name>
- Using ftp, transfer the update file(s) to the directory you created.
- Commit previous updates by running the updateios command.
$ updateios -commit
- Verify the update files that were copied. This step can be performed only if the link to openssl was created.
$ cp <directory_path>/ck_sum.bff /home/padmin
$ chmod 755 /home/padmin/ck_sum.bff
$ ck_sum.bff <directory_path>
If there are missing updates or incomplete downloads, an error message is displayed.
- Apply the update by running the updateios command.
$ updateios -accept -install -dev <directory_name>
- To load all changes, reboot the VIOS as user padmin .
$ shutdown -restart
Note: If the shutdown -restart command fails, run swrole -PAdmin so that padmin can set authorization and gain access to the shutdown command.
- If cluster services were stopped in step 3, restart cluster services.
$ clstartstop -start -n <cluster_name> -m <hostname>
- Verify that the update was successful by checking the results of the updateios command and by running the ioslevel command, which should indicate that the ioslevel is now 2.2.3.4.
$ ioslevel
Applying updates from a remotely mounted file system
WARNING: If the target node to be updated is part of a redundant VIOS pair, ensure that the VIOS partner node is fully operational before beginning to update the target node. NOTE that for VIOS nodes that are part of an SSP cluster, the partner node must be shown in 'cluster -status' output as having a cluster status of OK and a pool status of OK. If the target node is updated before its VIOS partner is fully operational, client LPARs may crash.
If the remote file system is to be mounted read-only, follow these steps.
The current level of the VIOS must be 2.2.2.1 or later if you use Shared Storage Pools.
Note: If the shutdown -restart command fails, run swrole -PAdmin so that padmin can set authorization and gain access to the shutdown command.
- Log in to the VIOS as the user padmin.
- If you use one or more File Backed Optical Media Repositories, you need to unload media images before you apply the Update Release. See details here.
- If you use Shared Storage Pools, then Shared Storage Pool Services must be stopped.
$ clstartstop -stop -n <cluster_name> -m <hostname>
- Mount the remote directory onto the Virtual I/O Server.
$ mount remote_machine_name:directory /mnt
- Commit previous updates by running the updateios command.
$ updateios -commit
- Verify the update files that were copied. This step can be performed only if the link to openssl was created.
$ cp /mnt/ck_sum.bff /home/padmin
$ chmod 755 /home/padmin/ck_sum.bff
$ ck_sum.bff /mnt
If there are missing updates or incomplete downloads, an error message is displayed.
- Apply the update by running the updateios command.
$ updateios -accept -install -dev /mnt
- To load all changes, reboot the VIOS as user padmin .
$ shutdown -restart
- If cluster services were stopped in Step 3, restart cluster services.
$ clstartstop -start -n <cluster_name> -m <hostname>
- Verify that the update was successful by checking the results of the updateios command and by running the ioslevel command, which should indicate that the ioslevel is now 2.2.3.4.
$ ioslevel
Applying updates from the CD/DVD drive
WARNING: If the target node to be updated is part of a redundant VIOS pair, ensure that the VIOS partner node is fully operational before beginning to update the target node. NOTE that for VIOS nodes that are part of an SSP cluster, the partner node must be shown in 'cluster -status' output as having a cluster status of OK and a pool status of OK. If the target node is updated before its VIOS partner is fully operational, client LPARs may crash.
This Update Release can be burned onto a CD by using the ISO image file(s). After the CD has been created, follow these steps.
The current level of the VIOS must be 2.2.2.1 or later if you use Shared Storage Pools.
Note: If the shutdown -restart command fails, run swrole -PAdmin so that padmin can set authorization and gain access to the shutdown command.
- Log in to the VIOS as the user padmin.
- If you use one or more File Backed Optical Media Repositories, you need to unload media images before you apply the Update Release. See details here.
- If you use Shared Storage Pools, then Shared Storage Pool Services must be stopped.
$ clstartstop -stop -n <cluster_name> -m <hostname>
- Place the CD-ROM into the drive assigned to VIOS.
- Commit previous updates by running the updateios command.
$ updateios -commit
- Apply the update by running the following update command.
$ updateios -accept -install -dev /dev/cdX
where X is the device number 0-N assigned to the VIOS.
- To load all changes, reboot the VIOS as user padmin.
$ shutdown -restart
- If cluster services were stopped in step 3, restart cluster services.
$ clstartstop -start -n <cluster_name> -m <hostname>
- Verify that the update was successful by checking the results of the updateios command and by running the ioslevel command, which should indicate that the ioslevel is now 2.2.3.4.
$ ioslevel
Performing the necessary tasks after installation
Checking for an incomplete installation caused by a loaded media repository
After installing an Update Release, you can use this method to determine whether you have encountered the problem of a loaded media repository.
Check the Media Repository by running this command:
$ lsrep
If the command reports: "Unable to retrieve repository date due to incomplete repository structure," then you have likely encountered this problem during the installation. The media images have not been lost and are still present in the file system of the virtual media library.
Running the lsvopt command should show the media images.
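Detecting this condition can be scripted by matching the error text. This sketch works on a saved sample string; on a live VIOS you would capture the real `lsrep` output instead:

```shell
# Sketch: flag the incomplete-repository condition from lsrep output.
# The sample string mirrors the message quoted above; on a VIOS you would
# use: out=$(lsrep 2>&1)
out="Unable to retrieve repository date due to incomplete repository structure"

if echo "$out" | grep -q "incomplete repository structure"; then
  echo "media repository needs recovery"
else
  echo "media repository OK"
fi
```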
Recovering from an incomplete installation caused by a loaded media repository
To recover from this type of installation failure, unload any media repository images, and then reinstall the ios.cli.rte package. Follow these steps:
- Unload any media images
$ unloadopt -vtd <file-backed_virtual_optical_device>
- Reinstall the ios.cli.rte fileset by running the following commands.
To escape the restricted shell:
$ oem_setup_env
To install the failed fileset:
# installp -Or -agX ios.cli.rte -d <device/directory>
To return to the restricted shell:
# exit
- Restart the VIOS.
$ shutdown -restart
- Verify that the Media Repository is operational by running this command:
$ lsrep
Additional information
NIM backup, install, and update information
Use of NIM to back up, install, and update the VIOS is supported.
For further assistance on the back up and install using NIM, refer to the NIM documentation.
Note : For install, always create the SPOT resource directly from the VIOS mksysb image. Do NOT update the SPOT from an LPP_SOURCE.
Use of NIM to update the VIOS is supported as follows:
Ensure that the NIM Master is at the appropriate level to support the VIOS image. Refer to the above table in the "Package information" section.
On the NIM Master, use the operation updateios to update the VIOS Server.
Sample: "nim -o updateios -a lpp_source=lpp_source1 ... ... ... "
For further assistance, refer to the NIM documentation.
On the NIM Master, use the operation alt_disk_install to update an alternate disk copy of the VIOS Server.
Sample:
"nim -o alt_disk_install -a source=rootvg -a disk=target_disk -a fix_bundle=(Value) ... ... ... "
For further assistance, refer to the NIM documentation.
If NIM is not used to update the VIOS, only the updateios or the alt_root_vg command from the padmin shell can be used to update the VIOS.
Installing the latest version of Tivoli TSM
This release of the VIOS contains several enhancements in the area of POWER virtualization.
Note: Version 6.1.0, the previous version of Tivoli TSM, is still shipped and installed from the VIOS installation DVD.
Tivoli TSM version 6.2.2
The Tivoli TSM filesets are now being shipped on the VIOS Expansion Pack, with the required GSKit8 libraries.
The following are sample installation instructions for the new Tivoli TSM filesets:
$ updateios -fs tivoli.tsm.client.api.32bit -dev /dev/cd0
NOTE: Any prerequisite filesets will be pulled in from the Expansion DVD, including for TSM the GSKit8.gskcrypt fileset.
- Insert the VIOS Expansion DVD into the DVD drive that is assigned to the VIOS partition.
- List Contents of the VIOS Expansion DVD.
$ updateios -list -dev /dev/cd0
Fileset Name
GSKit8.gskcrypt32.ppc.rte 8.0.14.7
GSKit8.gskcrypt64.ppc.rte 8.0.14.7
GSKit8.gskssl32.ppc.rte 8.0.14.7
GSKit8.gskssl64.ppc.rte 8.0.14.7
..
tivoli.tsm.client.api.32bit 6.2.2.0
tivoli.tsm.client.api.64bit 6.2.2.0
..
- Install the Tivoli TSM filesets.
- If needed, install additional TSM filesets.
$ updateios -fs tivoli.tsm.client.ba.32bit -dev /dev/cd0
- Verify that TSM is installed by listing the installed software.
$ lssw
Sample output:
..
tivoli.tsm.client.api.32bit 6.2.2.0 CF TSM Client - Application Programming Interface
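The verification step can be scripted by matching the fileset name and level in the listing. The sample line below is taken from the output above; on a live VIOS you would pipe `lssw` itself through grep:

```shell
# Sketch: confirm the TSM client API fileset shows up at the expected
# level. sample is one line of lssw-style output copied from this readme;
# on a VIOS you would use: lssw | grep "tivoli.tsm.client.api.32bit"
sample="tivoli.tsm.client.api.32bit 6.2.2.0 CF TSM Client - Application Programming Interface"

if echo "$sample" | grep -q "tivoli.tsm.client.api.32bit 6.2.2.0"; then
  echo "TSM client API 6.2.2.0 installed"
else
  echo "TSM client API not found at 6.2.2.0"
fi
```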
Fixes included in this release
APAR | Description |
---|---|
IV52705 | -migrate -auto flags from HMC doesn't work for link agg device |
IV52962 | CAA MKCLUSTER FAILS IF REPOSITORY DISK IS EMC CLARIION/INVISTA |
IV52970 | ethchan_config command will exit with 0 even if command failed |
IV52991 | Incorrect freespace, capacity reported for a recently created VG |
IV52995 | ETHER_ATON DOES NOT VALIDATE THE INPUT AS EXPECTED |
IV53369 | ERRPT ETHERCHANNEL RECOVERY MESSAGE MISSING |
IV53373 | Inventory for media repository files may fail due to long name |
IV53376 | PROBLEMS SVC LUN LEADS TO LONG FAIL TIMES, POSSIBLE SSP |
IV53379 | Possible ioctl hang when S/N changes on opened disk |
IV53381 | HMC CRASHES OR OUTPUTS "CHILD PROCESS RETURNED ERROR" ON THE |
IV53396 | topas -G exits without showing any cluster statistics |
IV53634 | ADD OPTION TO REMOVE HERALD ATTRIBUTE WITH LOGINMSG COMMAND |
IV53635 | procfiles and proccred shows negative value for uid and gid |
IV53636 | lsuser ALL option along with usernames doesn't throw error |
IV53640 | getaddrinfo cannot handle IPv6 scope/zone |
IV53654 | Diags may pass even if a failure is present |
IV54203 | System crash during high load causing negative runq load value |
IV54284 | NPIV IO request times out in protocol driver. |
IV54317 | AUDIT START FAILS SETTING ROLE BASED AUDIT CLASSES |
IV54354 | Diagnostics does not run on 10Gb and 1Gb FCoE SR, LR, Cu adapter |
IV54415 | Unable to connect to DB after installing SP1 |
IV54420 | CAA: CLCOMD HANGS WHEN CLUSTER SECURITY CONFIGURED |
IV54512 | AIX crashed during rmdev of fcs# port |
IV54616 | EEH error after diagnostics is run on 4port 1Gb adapter |
IV54775 | Cluster creation succeeds with errors |
IV54776 | LPM validation of client using NPIV will fail when VIOS |
IV54836 | Update disk ping driver to indicate when node info is stale |
IV54918 | Network adapter configuration attributes don't update |
IV54989 | runque value in topasout -P has to be an average value |
IV54996 | PAM_AUTH ALLOWS USER LOGIN WITHOUT ASKING PASSWORD IT'S NOT NULL |
IV55092 | VIO_DAEMON: ERR UNSUPPORTED ADDRESS FAMILY : 0 |
IV55188 | csm.ivm.server fileset missing |
IV55263 | mkvdev display wrong message if the Disk is not set max_transfer |
IV55358 | USR PART REMOVAL FAILURE FOR IOS.CLI.RTE |
IV55386 | SYSTEM MAY HANG WITH THREADS IN SECOW FUNCTION |
IV55442 | add/remove failure group fails even when successful |
IV55488 | fix poold recovery after fatal error in node state change |
IV55489 | rm any older pool disk cache service registration on kext init |
IV55490 | xtRead correctly handle non-zero level failures |
IV55547 | no bridge status in entstat for virtual Ethernet |
IV55567 | POOL DRIVER ISN'T PINNED AND THUS SYSTEM MAY CRASH |
IV55569 | Crash at dpuPushSpare+000050 |
IV55603 | DSI in sea_output |
IV55632 | DSI crash in storfwork:sfwdAddDisk |
IV55700 | SHUTDOWN -FR FAILS TO COMPLETE A REBOOT OF THE SYSTEM |
IV55762 | Join node post phase failure after the node was rebooted |
IV55763 | VIOS SSP crash in storage pool data range lock path |
IV55764 | clstartstop command fails with execute permissions |
IV55805 | IOSCLI COMMAND MAY RETURN INVALID RESULT |
IV55807 | AIXPERT MUST DIFFERENTIATE FAILEDRULES FROM NOTAPPLICABLERULES |
IV55808 | system hangs when filesystem and log are degraded |
IV55880 | Potential hotadapter hot swap failure |
IV55883 | HEA CAN LOG ERRORS IF JUMBO PACKET IS RECEIVED |
IV55884 | DSI IN VLAN_GET_USER |
IV55952 | MISLEADING POOL ID REPORTED IN ERRPT ENTRY FOR VIO_ALERT_EVENT |
IV55961 | lsgroup command caches user information when it should not be |
IV55995 | INCORRECT DESCRIPTION IN ERRPT DURING SEA FAILOVER |
IV56013 | ioscli chdev -udid |
IV56019 | OUT-OF-ORDER FROM SENDER WHEN THERE IS HEAVY BIDIRECTIONAL TRAFF |
IV56021 | TCP REQUEST/RESPONSE STYLE COMMUNICATION MAY HANG AFTER ACK PKT |
IV56023 | NETWORK PERFORMANCE DEGRADED WHEN SACK IS ENABLED |
IV56058 | DEADLOCK BETWEEN ELXENT_ENTER_LIMBO AND ELXENT_OUTPUT |
IV56059 | SSP commands fail during rolling upgrade |
IV56124 | OPTION -T OF RC.POWERFAIL DOES NOT WORK |
IV56145 | rmvdev -sea removes device even if an interface is configured |
IV56147 | TARGET DROPS TARGET FUNCTIONS: NO FUTURE LOGINS WILL BE ALLOWED |
IV56149 | concurrent LPMs operation fail with EIO error |
IV56213 | CVE-2013-5211:THE MONLIST FEATURE IN NTPD ALLOWS REMOTE ATTACKS |
IV56389 | xmtopasagg dies while topasrec/topas -C running |
IV56392 | DSI CRASH IN SCSIDISKPIN:POFCMDPROC |
IV56457 | aixpert undo fails to restore suid bits |
IV56464 | SYSTEM DUMP TO SECONDARY DUMP DEVICE CAN FAIL |
IV56721 | PADMIN CAN'T RUN COMMANDS AFTER TOPASREC STOP AND VIOS UPGRADE |
IV56784 | LSGCL COMMAND DOESN'T DISPLAY VIRTUAL OPTICAL LIBRARY COMMANDS |
IV56801 | REDUCEVG: THE DEFAULT STORAGE POOL "ROOTVG" CAN NOT BE REMOVED |
IV56892 | CRASH IN HD_BEGIN WITH SYNCVG -F -P OF STRIPED LV |
IV57304 | Fix system crash risk on rmdev of PCIe3 SAS adapter |
IV57307 | caa does not prevent adding an existing node to the cluster |
IV57347 | LSPV -CLUSTERNAME |
IV57411 | SYSTEM HANG IN PGSIGNAL |
IV57458 | MIG_VSCI CORE DUMPING WHEN UDID HAS % CHARACTER |
IV57459 | Potential security issue. |
IV58093 | AIXPERT RULE "HLS_DISRMTDMNS" MAY FAIL WHEN TCB IS ENABLED |
IV58095 | Disk VPD is missing the serial number |
IV58096 | SYSTEM CRASH IN IMARK |
IV58466 | ifconfig enX fails on 1Gports when speed is set to 1GFullDuplex |
IV58470 | NETWORK PERFORMANCE DEGRADATION ON FC5899 ADAPTER |
IV58475 | Incorrect return code from Paging devices configuration |
IV58477 | Add node failure to the SSP cluster after Remove/Replace PV op |
IV58481 | SSP Import PV operation fails due to stale entry in the DB |
IV58489 | Blowfish LPA allows login with partial repeated string |
IV58765 | EEH ERROR CAUSES KERNEL PANIC IN NDD_USRREQ(). |
IV58766 | Potential security issue. |
IV58770 | CAA: "DEADMAN TIMER TRIGGERED" WITH SANCOMM AND SHUTDOWN OF NODE |
IV58830 | Ack of PPRC event fails with EINVAL unexpectedly |
IV58833 | Inactive VRM pages are not getting restored |
IV58834 | Failure of pool I/O operation |
IV59043 | UNICAST MODE TCPSOCK CONNECTIONS BOUNCING UP/DOWN |
IV59100 | JFS2 FS MARKED CORRUPT WHEN USING FIND COMMAND NEEDING FSCK RUN |
IV59112 | rmdisk failure when clone of clone has snapshot |
IV59113 | rmdisk hangs when file with clone is being modified |
IV59114 | Console errors after removing snapshot after rmdisk |
IV59115 | Console log entries after removing snapshot after rmdisk |
IV59116 | DSI fileMigrateLogicalExtent during rmdisk |
IV59158 | SSP 'pv -replace' failure may prevent future 'pv -replace' |
IV59209 | LOSS OF REPOS AND REBOOT IN UNICAST CLUSTER CAUSES DMS TIMEOUT |
IV59495 | SEA creation fails with xml error |
IV59529 | Certain VIOS commands may hang when run through DRM |
IV59735 | resume of suspended LPAR may fail |
IV59968 | USYSIDENT DIDN'T REPORT THE CORRECT DISK STATUS |
IV60012 | PRIMARY SEA FAILS TO SEND RARP UPDATES IN CERTAIN CASES |
IV60384 | Data inconsistency detected in SSP |
List of fixes in 2.2.3.4
APAR | Description |
---|---|
IV59934 | Failure to do Secure LPM Migration with Tunnels |
IV60009 | Crash with Illegal Trap Instruction Interrupt on entcore |
IV60010 | Resume of client will fail when the reserve is set to PR_SHARED |
IV60199 | LPM while using SSP occasionally fails |
IV60297 | largesend turned on erroneously when ipsec encapsulation is on |
IV60299 | Potential security issue. |
IV60300 | Secure LPM may not create tunnels and may cause LPM to abort |
IV60363 | EEH permanent error seen on new P8 HW |
IV60499 | VIOS hang during host side port bounce |
IV60853 | SSP cluster hangs when concurrent listings are generated |
IV60904 | FCA_ERR6 THAT DECODES TO NO FREE CMD MIGHT HANG SOME PROCESSES |
IV60982 | perfprovider might cause performance impact on VIOS nodes of |
IV61136 | Migrate interface to first device in link agg when removing |
Document Information
Modified date:
19 February 2022
UID
hpc1vios20105f35