Fix Readme
Abstract
IBM PowerVM Virtual I/O Server
This Readme contains installation and other information about VIOS Update Release 2.2.4.10.
Package information
PACKAGE: Update Release 2.2.4.10
IOSLEVEL: 2.2.4.10
VIOS level | NIM Master level must be equal to or higher than |
---|---|
Update Release 2.2.4.10 | AIX 6100-09-06 or AIX 7100-04-01 |
In June 2015, VIOS introduced the minipack as a new service stream delivery vehicle, along with a change to the VIOS fix level numbering scheme. The VIOS "fix level" (the 4th number) changed to two digits. For example, VIOS 2.2.4.1 became VIOS 2.2.4.10. Refer to the VIOS Maintenance Strategy for more details regarding the change to the VIOS release numbering scheme.
General package notes
Review the list of fixes included in Update Release 2.2.4.10.
To take full advantage of all the function available in the VIOS, it may be necessary to be at the latest system firmware level. If a system firmware update is necessary, it is recommended that the firmware be updated before you update the VIOS to Update Release 2.2.4.10.
Microcode or system firmware downloads for Power Systems
The VIOS Update Release 2.2.4.10 includes the IVM code, but it will not be enabled on HMC-managed systems. Update Release 2.2.4.10, like all VIOS Update Releases, can be applied to either HMC-managed or IVM-managed VIOS.
Update Release 2.2.4.10 updates your VIOS partition to ioslevel 2.2.4.10. To determine if Update Release 2.2.4.10 is already installed, run the following command from the VIOS command line:
$ ioslevel
If Update Release 2.2.4.10 is installed, the command output is 2.2.4.10.
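When checking many partitions, the level comparison can be scripted; a minimal POSIX shell sketch (the sample value stands in for live `ioslevel` output, and the variable names are illustrative only):

```shell
# Compare the reported ioslevel against the target Update Release level.
# On a live VIOS this would be: reported=$(ioslevel)
reported="2.2.4.10"
target="2.2.4.10"
if [ "$reported" = "$target" ]; then
    echo "Update Release $target is already installed."
else
    echo "VIOS is at $reported; update to $target is required."
fi
```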
Known Capabilities and Limitations
The following requirements and limitations apply to Shared Storage Pool (SSP) features and any associated virtual storage enhancements.
Requirements for Shared Storage Pool
- Platforms: POWER6 and later (includes Blades), IBM PureFlex Systems (Power Compute Nodes only)
- System requirements per SSP node:
- Minimum CPU: 1 CPU of guaranteed entitlement
- Minimum memory: 4GB
- Storage requirements per SSP cluster (minimum): one Fibre Channel-attached disk of at least 1 GB for the repository
- At least one Fibre Channel-attached disk of at least 10 GB for data
Limitations for Shared Storage Pool
Software Installation
- All VIOS nodes must be at version 2.2.1.3 or later.
- When installing updates for VIOS Update Release 2.2.4.10 participating in a Shared Storage Pool, the Shared Storage Pool Services must be stopped on the node being upgraded.
- In order to take advantage of the new SSP features in 2.2.4.00 (including improvements in the min/max levels), all nodes in the SSP cluster must be at 2.2.4.00.
Feature | Min | Max |
---|---|---|
Number of VIOS Nodes in Cluster | 1 | 16 |
Number of Physical Disks in Pool | 1 | 1024 |
Number of Virtual Disks (LUs) Mappings in Pool | 1 | 8192 |
Number of Client LPARs per VIOS node | 1 | 200 |
Capacity of Physical Disks in Pool | 10GB | 16TB |
Storage Capacity of Storage Pool | 10GB | 512TB |
Capacity of a Virtual Disk (LU) in Pool | 1GB | 4TB |
Number of Repository Disks | 1 | 1 |
Capacity of Repository Disk | 512MB | 1016GB |
- Maximum number of physical volumes that can be added to or replaced from a pool at one time: 64
- The Shared Storage Pool cluster name must be less than 63 characters long.
- The Shared Storage Pool pool name must be less than 127 characters long.
- The maximum supported LU size is 4TB. However, it is recommended to limit the size of individual LUs to 16 GB for optimal performance in cases where all of the following conditions are met:
- The server generates a random access pattern for the I/O device.
- There are more than 8 processes concurrently performing I/O.
- The performance of the application is dependent on the I/O subsystem throughput.
Network Configuration
- Uninterrupted network connectivity is required for operation; that is, the network interface used for the Shared Storage Pool configuration must be on a highly reliable network that is not congested.
- A Shared Storage Pool configuration can use IPv4 or IPv6, but not a combination of both.
- A Shared Storage Pool configuration should configure the TCP/IP resolver routine for name resolution to resolve host names locally first, and then use the DNS. For step by step instructions, refer to the TCP/IP name resolution documentation in the IBM Knowledge Center.
- The forward and reverse lookup should resolve to the IP address/hostname that is used for Shared Storage Pool configuration.
- It is recommended that the VIOSs that are part of the Shared Storage Pool configuration keep their clocks synchronized.
Storage Configuration
- Virtual SCSI devices provisioned from the Shared Storage Pool may drive higher CPU utilization than the classic Virtual SCSI devices.
- Using SCSI reservations (SCSI Reserve/Release and SCSI-3 Reserve) for fencing physical disks in the Shared Storage pool is not supported.
- SANCOM will not be supported in a Shared Storage Pool environment.
Shared Storage Pool capabilities and limitations
- On the client LPAR, the Virtual SCSI disk is the only peripheral device type supported by SSP at this time.
- When creating Virtual SCSI Adapters for VIOS LPARs, the option "Any client partition can connect" is not supported.
- VIOSs configured for SSP require that Shared Ethernet Adapter(s) (SEA) be setup for Threaded mode (the default mode). SEA in Interrupt Mode is not supported within SSP.
- VIOSs configured for SSP can be used as a Paging Space Partition (PSP), but the storage for the PSP paging spaces must come from logical devices not within a Shared Storage Pool. Using a VIOS SSP logical unit (LU) as an Active Memory Sharing (AMS) paging space or as the suspend/resume file is not supported.
Installation information
Pre-installation information and instructions
Please ensure that your rootvg contains at least 30GB and that at least 4GB is free before you attempt to update to Update Release 2.2.4.10. Run the lsvg rootvg command, and then confirm that there is enough free space.
Example:
$ lsvg rootvg
VOLUME GROUP:       rootvg                   VG IDENTIFIER:  00f6004600004c000000014306a3db3d
VG STATE:           active                   PP SIZE:        64 megabyte(s)
VG PERMISSION:      read/write               TOTAL PPs:      511 (32704 megabytes)
MAX LVs:            256                      FREE PPs:       64 (4096 megabytes)
LVs:                14                       USED PPs:       447 (28608 megabytes)
OPEN LVs:           12                       QUORUM:         2 (Enabled)
TOTAL PVs:          1                        VG DESCRIPTORS: 2
STALE PVs:          0                        STALE PPs:      0
ACTIVE PVs:         1                        AUTO ON:        yes
MAX PPs per VG:     32512
MAX PPs per PV:     1016                     MAX PVs:        32
LTG size (Dynamic): 256 kilobyte(s)          AUTO SYNC:      no
HOT SPARE:          no                       BB POLICY:      relocatable
PV RESTRICTION:     none                     INFINITE RETRY: no
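The free-space check can also be scripted rather than read by eye; a minimal sketch, assuming the standard lsvg output layout (the sample line stands in for live `lsvg rootvg` output):

```shell
# Extract the free megabytes from `lsvg rootvg` output and compare
# against the 4 GB minimum required for the update.
# Live use: lsvg_output=$(lsvg rootvg)
lsvg_output='MAX LVs:        256      FREE PPs:   64 (4096 megabytes)'
free_mb=$(printf '%s\n' "$lsvg_output" | sed -n 's/.*FREE PPs:[^(]*(\([0-9][0-9]*\) megabytes).*/\1/p')
if [ "$free_mb" -ge 4096 ]; then
    echo "rootvg has ${free_mb} MB free; OK to update."
else
    echo "rootvg has only ${free_mb} MB free; extend rootvg first."
fi
```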
Upgrading from VIOS version lower than 2.1.0
If you are planning to update a VIOS at a version lower than 2.1, you must first migrate your VIOS to version 2.1.0 using the Migration DVD. After the VIOS is at version 2.1.0, apply Update Release 2.2.4.10 to bring the VIOS to the latest level.
Note that with this Update Release 2.2.4.10, a single boot alternative to this multiple step process is available to NIM users. NIM users can update by creating a single, merged lpp_source that combines the contents of the Migration DVD with the contents of this Update Release 2.2.4.10.
A single, merged lpp_source is not supported for VIOS that uses SDDPCM. However, if you use SDDPCM, you can still enable a single boot update by using the alternate method described at the following location:
SDD and SDDPCM migration procedures when migrating VIOS from version 1.x to version 2.x
After the VIOS migration from 1.X to 2.X is complete, you must disable Processor Folding.
Instructions to disable Processor Folding are detailed in the "Migration DVD" section of the following page:
Virtual I/O Server support for Power Systems
Upgrading from VIOS version 2.1.0 and above
If the current level of the VIOS is between 2.2.1.1 and 2.2.3.x, you can place the 2.2.4.10 update files in a directory and perform the update using the updateios command.
Before installing the VIOS Update Release 2.2.4.10
The update could fail if there is a loaded media repository.
Checking for a loaded media repository
To check for a loaded media repository, and then unload it, follow these steps.
- To check for loaded images, run the following command:
$ lsvopt
The Media column lists any loaded media.
- To unload media images, run the following command on each Virtual Target Device that has loaded images.
$ unloadopt -vtd <file-backed_virtual_optical_device>
- To verify that all media are unloaded, run the following command again.
$ lsvopt
The command output should show No Media for all VTDs.
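When several file-backed optical devices have media loaded, the unload step can be looped; a minimal sketch, assuming `lsvopt` output with the VTD name in the first column and "No Media" for empty devices (the sample text stands in for live output, and vtopt0/vtopt1 are illustrative device names):

```shell
# Emit an unloadopt command for every VTD that lsvopt reports as loaded.
# Live use would capture: lsvopt_output=$(lsvopt)
lsvopt_output='VTD             Media                    Size(mb)
vtopt0          image1.iso                   512
vtopt1          No Media                     n/a'
printf '%s\n' "$lsvopt_output" | awk 'NR > 1 && $2 != "No" { print $1 }' |
while read -r vtd; do
    echo "unloadopt -vtd $vtd"   # echoed for illustration; drop echo to run
done
```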
Migrate Shared Storage Pool Configuration
The Virtual I/O Server (VIOS) Version 2.2.2.1 or later supports rolling updates for SSP clusters. The VIOS can be updated to Update Release 2.2.4.10 using rolling updates.
If your current VIOS is running with Shared Storage Pool from 2.2.1.1 or 2.2.1.3, the following information applies:
A cluster that is created and configured on VIOS Version 2.2.1.1 or 2.2.1.3 must be migrated to version 2.2.1.4 or 2.2.1.5 before rolling updates can be used. This allows the user to keep their Shared Storage Pool devices. When the VIOS version is equal to or greater than 2.2.1.4 and less than 2.2.4.10, download the 2.2.4.10 update images into a directory, then update the VIOS to Update Release 2.2.4.10 using rolling updates.
If your current VIOS is configured with Shared Storage Pool from 2.2.1.4 or later, the following information applies:
The rolling updates enhancement allows the user to apply Update Release 2.2.4.10 to the VIOS logical partitions in the cluster individually without causing an outage in the entire cluster. The updated VIOS logical partitions cannot use the new SSP capabilities until all VIOS logical partitions in the cluster are updated.
To upgrade the VIOS logical partitions to use the new SSP capabilities, ensure that the following conditions are met:
- All VIOS logical partitions must have VIOS Update Release version 2.2.1.4 or later installed. After the update, you can verify that the logical partitions have the new level of software installed by typing the cluster -status -verbose command from the VIOS command line. In the Node Upgrade Status field, if the status of a VIOS logical partition is displayed as UP_LEVEL, the software level in the logical partition is higher than the software level in the cluster. If the status is displayed as ON_LEVEL, the software level in the logical partition and the cluster is the same.
- All VIOS logical partitions must be running. If any VIOS logical partition in the cluster is not running, the cluster cannot be upgraded to use the new SSP capabilities.
The VIOS SSP software monitors node status and automatically upgrades the cluster to use the new capabilities when all the nodes in the cluster have been updated; at that point, "cluster -status -verbose" reports "ON_LEVEL".
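The per-node status check can be scripted across the cluster; a minimal sketch, assuming `cluster -status -verbose` output containing one "Node Upgrade Status:" line per node (the sample text and node names viosA/viosB stand in for live output):

```shell
# Count nodes whose upgrade status is still UP_LEVEL; the cluster only
# switches to the new SSP capabilities once this count reaches zero.
# Live use: status_output=$(cluster -status -verbose)
status_output='Node Name: viosA
Node Upgrade Status: ON_LEVEL
Node Name: viosB
Node Upgrade Status: UP_LEVEL'
pending=$(printf '%s\n' "$status_output" | grep -c 'Node Upgrade Status: UP_LEVEL')
if [ "$pending" -eq 0 ]; then
    echo "All nodes ON_LEVEL; cluster will use the new SSP capabilities."
else
    echo "$pending node(s) still UP_LEVEL; update the remaining nodes."
fi
```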
Installing the Update Release
There is now a method to verify the VIOS update files before installation. This process requires access to openssl by the padmin user, which can be accomplished by creating a link.
To verify the VIOS update files, follow these steps:
$ oem_setup_env
Create a link to openssl
# ln -s /usr/bin/openssl /usr/ios/utils/openssl
Verify the link to openssl was created
# ls -alL /usr/bin/openssl /usr/ios/utils/openssl
Both files should display similar owner and size
# exit
Use one of the following methods to install the latest VIOS Service Release. As with all maintenance, you should create a VIOS backup before making changes.
If you are running a Shared Storage Pool configuration, you must follow the steps in Migrate Shared Storage Pool Configuration.
Note: While running 'updateios' in the following steps, you may see accessauth messages; these messages can safely be ignored.
If your current level is between 2.2.1.1 and 2.2.2.1, you can directly apply 2.2.4.10 updates. This fixes an update problem with the builddate on bos.alt_disk_install.boot_images fileset.
If your current level is 2.2.2.1, 2.2.2.2, 2.2.2.3, or 2.2.3.1, you need to run updateios command twice to get bos.alt_disk_install.boot_images fileset update problem fixed.
Run the following command after the step of "$ updateios -accept -install -dev <directory_name>" completes.
$ updateios -accept -dev <directory_name>
Depending on the VIOS level, one or more of the LPPs below may be reported as "Missing Requisites", and they may be ignored.
MISSING REQUISITES:
X11.loc.fr_FR.base.lib 4.3.0.0 # Base Level Fileset
bos.INed 6.1.6.0 # Base Level Fileset
bos.loc.pc.Ja_JP 6.1.0.0 # Base Level Fileset
bos.loc.utf.EN_US 6.1.0.0 # Base Level Fileset
bos.mls.rte 6.1.x.x # Base Level Fileset
Applying updates
WARNING: If the target node to be updated is part of a redundant VIOS pair, ensure that the VIOS partner node is fully operational before beginning to update the target node. Note that for VIOS nodes that are part of an SSP cluster, the partner node must be shown in 'cluster -status' output as having a cluster status of OK and a pool status of OK. If the target node is updated before its VIOS partner is fully operational, client LPARs may crash.
The current level of the VIOS must be 2.2.2.1 or later if you use a Shared Storage Pool.
- Log in to the VIOS as the user padmin.
- If you use one or more File Backed Optical Media Repositories, you need to unload media images before you apply the Update Release. See details here.
- If you use Shared Storage Pools, then Shared Storage Pool Services must be stopped.
$ clstartstop -stop -n <cluster_name> -m <hostname>
- To apply updates from a directory on your local hard disk, follow the steps:
- Create a directory on the Virtual I/O Server.
$ mkdir <directory_name>
- Using ftp, transfer the update file(s) to the directory you created.
To apply updates from a remotely mounted file system that is mounted read-only, follow these steps:
- Mount the remote directory onto the Virtual I/O Server:
$ mount remote_machine_name:directory /mnt
The update release can be burned onto a CD by using the ISO image file(s). To apply updates from the CD/DVD drive, follow the steps:
- Place the CD-ROM into the drive assigned to VIOS.
- Create a directory on the Virtual I/O Server.
- Commit previous updates by running the updateios command
$ updateios -commit
- Verify the update files that were copied. This step can only be performed if the link to openssl was created.
$ cp <directory_path>/ck_sum.bff /home/padmin
$ chmod 755 /home/padmin/ck_sum.bff
$ ck_sum.bff <directory_path>
If there are missing updates or incomplete downloads, an error message is displayed.
- Apply the update by running the updateios command
$ updateios -accept -install -dev <directory_name>
- To load all changes, reboot the VIOS as user padmin .
$ shutdown -restart
Note: If the shutdown -restart command fails, run swrole -PAdmin so that padmin can set authorization and establish access to the shutdown command properly.
- If cluster services were stopped in step 3, restart cluster services.
$ clstartstop -start -n <cluster_name> -m <hostname>
- Verify that the update was successful by checking the results of the updateios command and by running the ioslevel command, which should report that the ioslevel is now 2.2.4.10.
$ ioslevel
Performing the necessary tasks after installation
Checking for an incomplete installation caused by a loaded media repository
After installing an Update Release, you can use this method to determine if you have encountered the problem of a loaded media library.
Check the Media Repository by running this command:
$ lsrep
If the command reports: "Unable to retrieve repository date due to incomplete repository structure," then you have likely encountered this problem during the installation. The media images have not been lost and are still present in the file system of the virtual media library.
Running the lsvopt command should show the media images.
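Detection of this failure can also be scripted; a minimal sketch, assuming the error text shown above (the sample string stands in for live `lsrep` output):

```shell
# Check lsrep output for the incomplete-repository error message.
# Live use: lsrep_output=$(lsrep 2>&1)
lsrep_output='Unable to retrieve repository date due to incomplete repository structure'
case "$lsrep_output" in
  *"incomplete repository structure"*)
    echo "Media repository is damaged; unload media and reinstall ios.cli.rte." ;;
  *)
    echo "Media repository looks intact." ;;
esac
```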
Recovering from an incomplete installation caused by a loaded media repository
To recover from this type of installation failure, unload any media repository images, and then reinstall the ios.cli.rte package. Follow these steps:
- Unload any media images
$ unloadopt -vtd <file-backed_virtual_optical_device>
- Reinstall the ios.cli.rte fileset by running the following commands.
To escape the restricted shell:
$ oem_setup_env
To install the failed fileset:
# installp -Or -agX ios.cli.rte -d <device/directory>
To return to the restricted shell:
# exit
- Restart the VIOS.
$ shutdown -restart
- Verify that the Media Repository is operational by running this command:
$ lsrep
Additional information
NIM backup, install, and update information
Use of NIM to back up, install, and update the VIOS is supported.
For further assistance on the back up and install using NIM, refer to the NIM documentation.
Note: For install, always create the SPOT resource directly from the VIOS mksysb image. Do NOT update the SPOT from an LPP_SOURCE.
Use of NIM to update the VIOS is supported as follows:
Ensure that the NIM Master is at the appropriate level to support the VIOS image. Refer to the above table in the "Package information" section.
On the NIM Master, use the operation updateios to update the VIOS Server.
Sample: "nim -o updateios -a lpp_source=lpp_source1 ... ... ... "
For further assistance, refer to the NIM documentation.
On the NIM Master, use the operation alt_disk_install to update an alternate disk copy of the VIOS Server.
Sample:
"nim -o alt_disk_install -a source=rootvg -a disk=target_disk -a fix_bundle=(Value) ... ... ... "
For further assistance, refer to the NIM documentation.
If NIM is not used to update the VIOS, only the updateios or the alt_root_vg command from the padmin shell can be used to update the VIOS.
Installing the latest version of Tivoli TSM
This release of VIOS contains several enhancements in the area of POWER virtualization. The following list provides the features of each element by product area.
Note: Version 6.1.0, the previous version of Tivoli TSM, is still shipped and installed from the VIOS installation DVD.
Tivoli TSM version 6.2.2
The Tivoli TSM filesets are now being shipped on the VIOS Expansion Pack, with the required GSKit8 libraries.
The following are sample installation instructions for the new Tivoli TSM filesets:
$ updateios -fs tivoli.tsm.client.api.32bit -dev /dev/cd0
NOTE: Any prerequisite filesets will be pulled in from the Expansion DVD, including the GSKit8.gskcrypt fileset required by TSM.
- Insert the VIOS Expansion DVD into the DVD drive that is assigned to the VIOS partition.
- List Contents of the VIOS Expansion DVD.
$ updateios -list -dev /dev/cd0
Fileset Name
GSKit8.gskcrypt32.ppc.rte 8.0.14.7
GSKit8.gskcrypt64.ppc.rte 8.0.14.7
GSKit8.gskssl32.ppc.rte 8.0.14.7
GSKit8.gskssl64.ppc.rte 8.0.14.7
..
tivoli.tsm.client.api.32bit 6.2.2.0
tivoli.tsm.client.api.64bit 6.2.2.0
..
- Install Tivoli TSM filesets.
- If needed, install additional TSM filesets.
$ updateios -fs tivoli.tsm.client.ba.32bit -dev /dev/cd0
- Verify that TSM is installed by listing the installed software.
$ lssw
Sample output:
..
tivoli.tsm.client.api.32bit 6.2.2.0 CF TSM Client - Application Programming Interface
Fixes included in this release
APAR | Description |
---|---|
IV70173 | LOCAL USER LOGIN FAILS WITH LDAP ERROR MESSAGE |
IV70475 | RESTORE ENCRYPTED FILESYSTEM CANNOT CREATE /VAR/ADM/RAS/BI.LOG |
IV70799 | RPC.MOUNTD COREDUMP WITH ILLEGAL INSTRUCTION |
IV70843 | AIXPERT MAY ADD WRITE PERMISSION TO PROGRAMS |
IV71011 | V3 NFS EXPORTED VERITAS FS CONSUMES EXCESSIVE CPU |
IV71031 | AIX NFSSTAT COMMAND DOES NOT REPORT ATTR CACHE TIMEOUT VALUES |
IV71034 | PRTCONF CAN NOT REPORT PROCESSOR TYPE IN NON-ENGLISH LANGUAGE |
IV71115 | KERNEL ABEND CRASH IN OBJECTSREMOVE |
IV71225 | CREATING LARGE RAMDISK TAKES A LONG TIME |
IV71257 | SAP UUID_CREATE() STALLS DUE TO NDD_LOCK SERIALIZATION |
IV71304 | DUPLICATE PRIMARY AND TPRIMARY DUMP DEVICES IN SWSERVAT ODM. |
IV71476 | CLOSING SOCKET WHILE 2ND THREAD IS POLLING CAUSES HANG. |
IV71481 | NIMADM ON CLIENTS WITH BOS.COMPAT.LINKS INSTALLED MISBEHAVES. |
IV71649 | CAA: MKCLUSTER FAILS TO CREATE /ETC/CLUSTER/LOCKS DIRECTORY |
IV71659 | SPURR_VERSION MIGHT BE RESET TO 0 WHEN MOVING TO SMT-8 ON BRAZOS |
IV71784 | DEVICES CONNECTED TO USB0 NOT USABLE ON CONSOLE LFT. |
IV71787 | LSCLUSTER DOES NOT RECOGNIZE AN EXISTING NODE NAME |
IV71806 | PART FAILS TO CREATE TAR FILE AFTER SETTING HIGH SECURITY LEVEL |
IV71857 | AIX NFS4 SERVER CRASHES IN SM4_VCM_PURGE_RIGHTS() |
IV71879 | NOAC SUPPORT IN MKNFSMNT AND CHNFSMNT |
IV71948 | MULTIBOS TAGS INCORRECT WHEN /ETC/FILESYSTEMS.CHROOT.SAV EXISTS |
IV72007 | INCORRECT READ MAY OCCUR DURING JOINVG |
IV72066 | CLUSTER0 MISSING LEADS TO MKCLUSTER FAILURE |
IV72122 | NO ERRORS SEEN WHEN DUPLICATE FLAGS IN "NO" COMMAND ARE USED |
IV72147 | CFGIPSEC NOT PROPERLY HANDLING COMMENTS IN NEXTBOOT FILE |
IV72379 | SYSTEM MAY CRASH DURING I/O TO STRIPED LV WITH INFINITE RETRY |
IV72499 | CORE DUMP AFTER FAILED DLOPEN IN 64-BIT PROGRAM |
IV72705 | FSCK HANGS WHILE REPAIRING ACL INODE |
IV72781 | IPSECSTAT DISPLAYING NEGATIVE VALUES |
IV72792 | IFUNIT() CRASH WHEN IOCTL CALLED WITH WRONG SIOCGSIZIFCONF VALUE |
IV72893 | CLCOMD CAN USE TOO MUCH CPU WHEN AHAFS NOT ACCESSIBLE |
IV73259 | SUPPORT FOR CRITICAL VG ATTRIBUTE ALT_DISK_COPY/ALT_DISK_INSTALL |
IV73415 | JFS2 FILESYSTEM DOES NOT INCREMENT PER-CPU IGET COUNTER |
IV73440 | SYSTEM CRASH IN DOUBLE FREE SCSIDISK_CONFIG |
IV73516 | CRASH DOUBLE FREE IN SCSIDISK_TERM_MPIO |
IV73528 | ARTEXGET IS SLOW WITH LARGE NUMBER OF COMMANDS TO EXECUTE |
IV73626 | CREATING AN EXTERNAL SNAPSHOT LARGER THAN 1 TB MAY FAIL |
IV73642 | LSDEV -TYPE ENT4SEA EXCLUDES ROCE ADAPTER |
IV73707 | AIX NFSV4 SERVER CRASHES WHEN POWERHA STARTS |
IV73758 | EXECUTING STOPSRC -G IKE CAN HANG TMD |
IV73768 | EXTENDVG OF CONC VG MAY FORCE THE VG OFFLINE ON REMOTE NODE |
IV73771 | A JFS2 FILE OPENED WITH O_SYNC MAY HAVE ZERO LENGTH AFTER REBOOT |
IV73910 | KSH RUNS OUT OF MEMORY WHILE ASSIGING FEW MB DATA TO A VARIABLE |
IV74173 | EXT PARAMETER NOT CORRECTLY DEALT WITH IN DIO |
IV74403 | FUSER/PROCFILES COMMAND MAY STUCK IN AN ENDLESS LOOP |
IV74417 | LKDEV -L HDISK -D MAY FAIL WITH 0514-518 CANNOT ACCESS THE CULK |
IV74540 | CRON AND AT JOBS NOT WORKING FOR DOMAINLESS USERS |
IV74606 | ALT_DISK_COPY ON 4K DISK TRIGGERS LVM_IO_FAIL |
IV74766 | JFS2 MAY CRASH WHILE SHRINKING |
IV75001 | MIGRATEPV FAILURE MAY CAUSE IO HANG IN CONCURRENT VG |
IV75013 | HANG WHEN TRACE AND DR FOR CPU ADD STARTED AT ONCE |
IV75015 | DSH SOMETIMES IGNORE LAST BLANK LINES AND SOMETIMES NOT |
IV75041 | MEMORY LEAKS RELATE TO GET_NIS_ENTRY ROUTINE |
IV75044 | EXPORTING LANG=JA_JP WITHIN SCRIPT DOES NOT EFFECT |
IV75059 | SYSTEM HUNG WITH PSMDS HOGGING CPUS WHEN VM_PVLIST_DOHARD=1 |
IV75073 | NIM -O UPDATEIOS COMMAND FAILS EVEN AFTER APPLYING IV60805 |
IV75266 | INVALID FILESYSTEM UTILIZATION EVENT WITH AHAFS MONITORING |
IV75273 | LSLDAP KEEPS ON FAILLING WHEN LDAP SERVER CLOSES CONNECTION |
IV75274 | MULTIBOS REPORTS SUCCESS EVEN IF NOT ALL FILES WERE COPIED |
IV75364 | CAA: UNEXPECTED START_NODE BY PEER NODE LEADS TO AST PANIC |
IV75387 | CAA: SYSTEM WILL CRASH IF GOSSIP PACKET HAS ID = 0 |
IV75495 | CODEGCHECK MIGHT FAIL WITH FALSE ERROR |
IV75518 | CAT, PIPE, GREP -L ISSUE |
IV75682 | SLOW MBUFS RELEASE CAUSES HIGH CPU |
IV75738 | LSSEC PERFORMANCE SUFFERS WHEN EFS IS ENABLED |
IV75896 | SYSTEM CRASH IN DBALLOCAG DUE TO ENOSPC |
IV76050 | THE UUID_CREATE SUBROUTINE MAY CREATE DUPLICATE UUIDS |
IV76151 | IKED LOOPS AND CAUSES CPU LOAD WHEN 0-BYTE DATAGRAM IS PRESENT |
IV76234 | SYMBOL RESOLUTION FAILURE WHEN PERFORMING A MULTIBOS UPDATE |
IV76252 | PASSWORD AGING DOESN'T WORK FOR LDAP IF LDAP USES LOCAL DEFAULTS |
IV76409 | CANNOT UNMOUNT FILE SYSTEM AFTER USING SHARED SYMBOL TABLE. |
IV76503 | POSIX_SPAWN FUNCTIONS MAY CORE DUMP IN 64-BIT PROGRAM |
IV76508 | EFSENABLE -D WRT LDAP OPERATION FAILS IN SOME SCENARIOS |
IV76662 | GETIOPRI(PID_T) COULD ERRONEOUSLY RETURN -1 FOR NON-ROOT USERS |
IV76720 | USING CHDEF WITH INCORRECT SYNTAX DOES NOT DISPLAY USAGE ERROR |
IV76774 | SYSTEM CRASH IN PRGETPSINFO() |
IV76806 | LSGROUP INCORRECTLY CACHES LDAP USER INFORMATION |
IV76813 | JFS2 FILE SYSTEM MAY GET J2_IMAP_CORRUPT ERROR IN ERROR LOG |
IV77175 | CAA: MERGE FAILURE DUE TO STALE JOIN_PENDING FLAG. |
IV77529 | DISK NORMAL OPEN FAILS WITH EACCES |
IV77678 | THE MORE COMMAND MAY FAIL TO HANDLE LARGE FILES (>=4GB) |
IV77697 | CIO/DIO ENABLED FILESYSTEMS CAN CRASH THE SYSTEM WITH ISI_PROC |
IV78453 | IMPROVED DRIVER'S FAIRNESS ALGORITHM TO AVOID I/O STARVATION |
IV78454 | IMPROVED DRIVER'S FAIRNESS LOGIC TO AVOID I/O STARVATION |
List of fixes in 2.2.4.10
APAR | Description |
---|---|
IV78895 | NPIV Migration failed with Function npiv_phys_spt tried to |
IV78896 | Improve SSP DB access performance |
IV78897 | LU-level validation for LPM fails with IBM i or Linux clients |
IV78899 | System crash in entcore_link_change_nic_callback() |
IV78900 | Removing certain virtual devices incorrectly fails. |
IV78931 | Console errors for VLAN over SEA |
Document Information
Modified date:
19 February 2022
UID
hpc1vios6555705b