IBM Support

Readme and release notes for LoadLeveler 5.1.0.19 (LL_scheduler-5.1.0.19-power-AIX)

Fix Readme


Abstract

This readme describes the LL_scheduler-5.1.0.19-power-AIX update for the scheduler component of LoadLeveler 5.1.0.19 on AIX, including installation information, known limitations, and the history of problems fixed.

Content

Readme file for: LL_scheduler-5.1.0.19-power-AIX
Product/Component Release: 5.1.0.19
Update Name: LL_scheduler-5.1.0.19-power-AIX
Fix ID: LL_scheduler-5.1.0.19-power-AIX
Publication Date: 18 August 2015
Last modified date: 18 August 2015

Installation information

Download location

Below is a list of components, platforms, and file names that apply to this Readme file.

Fix Download for AIX

Product/Component Name: LoadLeveler
Platform: AIX 5.3, AIX 6.1
Fix: LL_scheduler-5.1.0.19-power-AIX

Prerequisites and co-requisites

None

Known limitations

  • - Known Limitations

    For LL 5.1.0:

    • LoadLeveler 5.1.0.4 is a mandatory update to provide corrective fixes for LoadLeveler v5.1.0 on Linux x86 systems.
    • LoadLeveler 5.1.0.6 is a mandatory update to provide corrective fixes for LoadLeveler v5.1.0 on Linux power systems.
    • If the scheduler and resource manager components on the same machine are not at the same level, the daemons will not start up.
    • Preemption cannot be done for jobs which use collective acceleration units (CAU) by specifying either the collective_groups LoadLeveler keyword or MP_COLLECTIVE_GROUPS environment variable. If jobs are using CAUs, the keyword PREEMPTION_SUPPORT = NONE (which is the default) has to be specified in the LoadLeveler configuration.
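    The CAU restriction above can be expressed directly in the configuration. A minimal sketch, assuming a typical LoadL_config file; the keyword name and value are taken from this readme:

```
# LoadL_config fragment (sketch). Jobs that use collective acceleration
# units (CAU) require preemption support to be disabled:
PREEMPTION_SUPPORT = NONE
```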

    For LL 5.1.0.3+:

    • When submitting a batch job that uses Collective Acceleration Unit (CAU) groups, the MP_COLLECTIVE_GROUPS environment variable must specify the number of collective groups to be used by the job.
    • If the PREEMPTION_SUPPORT keyword is set to full in the LoadLeveler configuration file:
      • The collective_groups keyword or MP_COLLECTIVE_GROUPS environment variable cannot be specified for preemptable jobs.
    • If the PREEMPTION_SUPPORT keyword is set to no_adapter in the LoadLeveler configuration file and the collective_groups keyword or the MP_COLLECTIVE_GROUPS environment variable is set, you must set the following environment variables for the job:
      • LAPI_DEBUG_COMM_TIMEOUT=yes
      • MP_DEBUG_COMM_TIMEOUT=yes
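    For the no_adapter case above, the environment variables can be passed through the job command file. A hedged sketch; the # @ environment keyword is standard LoadLeveler job command file syntax, and the collective-group count shown is illustrative:

```
# @ job_type    = parallel
# @ environment = MP_COLLECTIVE_GROUPS=1; LAPI_DEBUG_COMM_TIMEOUT=yes; MP_DEBUG_COMM_TIMEOUT=yes
# @ queue
```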

    For LL 5.1.0.7:

    • Do not install the LL 5.1.0.7 service update if you are using or planning to use a database for the LoadLeveler configuration.

    For LL 5.1.0.12:

    • APAR IV26259 of Parallel Environment Runtime Environment (1.2.0.9 or higher) must be installed if you are using the checkpoint/restart function.

    For LL 5.1.0.13:

    • Do not install the LL 5.1.0.13 service update if you have PE Runtime Environment 1.1 installed.
    • Support for PE Runtime Environment 1.1 will be available with APAR IV33552.

    For LL 5.1.0.16:

    • LL 5.1.0.16 is only supported for Blue Gene/Q.

Installation information

  • - Installation procedure

    Install the LoadLeveler updates on your system by using the normal smit update_all procedure.

    For further information, consult the LoadLeveler Library for the appropriate version of the LoadLeveler AIX Installation Guide.
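    As a sketch, a typical update session might look like the following; smit update_all is the documented path, installp is its command-line equivalent, and the download directory shown is illustrative:

```
cd /tmp/LL_scheduler-5.1.0.19-power-AIX   # directory holding the .bff images
smit update_all                           # interactive update of installed filesets
# or non-interactively:
installp -agX -d . all                    # apply, auto-resolve prerequisites
lslpp -l "LoadL.scheduler*"               # verify the new 5.1.0.19 level
```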

Additional information

  • - Package contents

    LoadL.scheduler.full.bff | 5.1.0.19
    LoadL.scheduler.so.bff | 5.1.0.19
    LoadL.scheduler.msg.en_US.bff | 5.1.0.19

  • - Changelog

    Notes

    Unless specifically noted otherwise, this history of problems fixed for LoadLeveler 5.1.0.x applies to:

    • LoadLeveler 5.1.0.x for Red Hat Enterprise Linux 6 (RHEL6) on servers with 64-bit Opteron or EM64T processors
    • LoadLeveler 5.1.0.x for SUSE LINUX Enterprise Server 11 (SLES11) on servers with 64-bit Opteron or EM64T processors
    • LoadLeveler 5.1.0.x for Red Hat Enterprise Linux 6 (RHEL6) on POWER servers
    • LoadLeveler 5.1.0.x for AIX 7

    Restriction section
    For LL 5.1.0:
    • If the scheduler and resource manager components on the same machine are not at the same level, the daemons will not start up.
    • Please refer to the "Known Limitations" section under the fix pack README for more limitation information for this release.


    Additional Information section
    For LL 5.1.0:
    • Please refer to the "Setting up control groups on a diskless (or stateless) cluster for preemption, process tracking, workload management (WLM), and checkpoint/restart" under the "Installation Information" section for more information on how to set up control groups.

    General LoadLeveler problems

    Problems fixed in LoadLeveler 5.1.0.19 [June 3, 2015]

    • Update for AIX ONLY
    • Fixed an issue where a job became stuck when master_node_exclusive is configured.
    • When a node with multiple adapters has one adapter down, LoadLeveler will not dispatch a step to that node.
    • When the LoadL_starter log file fails to merge due to a filesystem issue, an email is now sent to the LoadLeveler administrator.
    • Fixed a timing window issue between do_find_machine() and do_add_machine() that caused a LoadL_region_mgr core dump.
    • Fixed an issue where the llrun command did not forward the DISPLAY environment variable correctly.
    • Keywords in the user sub-stanza now work correctly.
    • Fixed an issue where LoadLeveler did not recognize machine names correctly with some configurations.
    • Resource Manager only:
      • Fixed an llsummary issue where complete information was sometimes not shown.
      • Fixed a LoadL_startd core dump that occurred when many jobs complete simultaneously.
      • Fixed a LoadL_startd hang caused by a deadlock when accounting is enabled in the configuration.
      • Fixed a bug where the RESUME_ON_SWITCH_TABLE_ERROR_CLEAR keyword did not work.
      • Fixed a timing issue that caused a LoadL_startd core dump when llctl -g reconfig is executed.
      • The LOADL_HOSTFILE environment variable is no longer deleted before epilog execution.
    • Scheduler only:
      • Fixed an issue where a preempting step remained idle forever when it failed to be dispatched.
      • Fixed an issue where more than one non_shared step was allocated to the same node.
      • Fixed a bug where the preempted adapter window resource was not cleaned up when the preempting step is rescheduled.
      • The first_node_tasks job command file keyword now works as designed.

    Problems fixed in LoadLeveler 5.1.0.18 [May 8, 2014]

    • Fixed a LoadL_startd core dump that occurred while attempting to save a log file when the open() of the .old file fails.
    • A bug in the routing code of class BgSwitch/BgCable was fixed.
    • LoadLeveler was changed to not use cables for passthrough if a midplane has any nodeboards unavailable.
    • LoadLeveler now supports job command files that contain multiple runjob command lines.
    • LoadLeveler has been changed to correct the compatibility problem with release 5.1.0.15 and future service levels.
    • The LoadL_master daemon has been changed to correct the serialization issue, eliminating the case where resources may be unavailable because a manager daemon is not running.
    • LoadLeveler is changed to remove the code which attempts to transmit machine data which became obsolete.
    • The ll_get_data API with type LL_StepBgSizeAllocated now returns the correct Bg Size Allocated value from the history file for sub-block jobs.
    • Resource Manager only:
      • LoadLeveler Startd daemon is changed to remove a synchronization issue which delayed returning a job to the job queue for re-dispatch after the job was rejected.
      • The startd has been changed to clear the starter process id as soon as the startd detects that the starter process has terminated.
      • The llq -w command will not cause the LoadL_startd daemon to terminate with a SEGV.
      • The LoadL_schedd daemons will not terminate with a SEGV while processing a command from a remote cluster.
      • The LoadLeveler algorithm for calculating cpu shares is changed to successfully create WLM classes for jobs requesting a consumable cpu requirement of 66 or greater when the WLMSHARES policy is used to enforce CPU usage.
      • LoadLeveler now supports job command files that run multiple sub-block runjob command lines at the same time.
      • LoadLeveler is changed to set a positive OOM killer value for the slave task, so the OOM killer can successfully kill slave tasks in a non-IBM MPI environment.
    • Scheduler only:
      • LoadL_negotiator will not assign the resources from drained midplanes.
      • The block which is used to run sub-block steps will not be freed while there are running steps on it.

    Problems fixed in LoadLeveler 5.1.0.17 [September 4, 2013]

    • The LoadLeveler code has been modified to record timestamps for all 4 configuration files in the shared memory and to compare each time stamp from the SHM buffer with the time stamp from the corresponding file time stamp to decide whether the shared memory needs to be refreshed.
    • Added energy capping support on Power Linux.
    • Added a check to the LlNetProcess::cmRecovery member function to bypass taking any action for non-daemon processes to avoid the llctl hang.
    • Shortened the length of the suspend_control field in table TLL_CFGCluster to avoid the row length limit.
    • Fixed the issue that the LoadL_startd daemon will take too long to discover an alternate region manager when re-starting an execute node (one running the LoadL_startd daemon) after a region manager failover takes place.
    • Nodes on which LoadL_schedd failed to get energy consumption are now skipped in the energy consumption calculation.
    • Fixed the issue that energy consumption was incorrect after removing the ibmaem module.
    • Removed the check of decreasing column size when updating DB from PTF16 to PTF17.
    • Fixed the issue that llstatus -l failed to print energy after reconfiguration.
    • Fixed the issue that the LoadL_negotiator crashed because of processing incomplete jobs in a send all jobs transaction.
    • Added README file to explain the restrictions regarding the PTF 14 incompatibility.
    • The LoadLeveler code has been modified not to generate the output file when LoadL_schedd can't access the energy output file directory.
    • Resource Manager only:
      • Added the necessary checking for NULL before attempting to reference the object pointer to avoid core dump of LoadL_Startd.
      • The serialization issue has been corrected to avoid the core dump of Resource Manager in high stress conditions.
      • The LoadLeveler code has been changed to set the LOADL_TOTAL_TASKS environment variable for LL jobs with job type of PARALLEL.
      • Changed the description of the -p option for llrstatus command.
      • Fixed the issue that LoadL_startd crashed because a single thread attempted to acquire the same UID lock twice.
    • Blue Gene:
      • Fixed an issue where, when a Blue Gene job completes, LoadLeveler could free the blocks before the runjob client processes exit.
      • Added the support for BlueGene API LiveModel::monitorBlockAllocate to receive the block deallocation event.
      • Added new catalog messages for BlueGene sub block support.
      • Fixed the issue that LoadL_negotiator crashed at deallocateBlockThread when shutting down LL.
      • Added support for BGQ co-scheduled jobs.
      • Fixed the issue that LoadLeveler allocated blocks with wrong midplanes.
      • Fixed the issue that LoadLeveler incorrectly tracked dual-use I/O link usage.

    Problems fixed in LoadLeveler 5.1.0.16 [July 30, 2013]

    • Update for Blue Gene/Q only
    • Remove the misleading error message caused by unthread_open.
    • The LoadLeveler code has been modified to ignore the pending flush or vacate when completing an interactive step.
    • Add support for GFlops in energy reports.
    • The problem that upgrading the LoadLeveler utility package failed has been fixed.
    • The problem that calling std::sort caused invalid object pointers, leading to a LoadL_negotiator crash, has been fixed.
    • The problem that the resource manager never discovered the current serving CM after a CM failover has been fixed.
    • The LoadLeveler code has been modified to avoid referencing a null pointer which leads to llsummary command core dump.
    • Resource Manager only:
      • The problem that LoadL_schedd failed to start if there are corrupted spool files has been fixed.
      • Add the required synchronization to the LoadL_startd to ensure the Max_Starters value does not get set incorrectly.
      • The problem that no events were found in the RAS log after killing LoadL_schedd has been fixed.
      • The problem that the aggregate adapter with no managed adapters leads to LoadL_startd core dump has been fixed.
    • Blue Gene:
      • A new configuration keyword enforce_bg_min_block_size is added. When the value is true, the I/O ratio has no effect on the block size. When the value is false, the behavior is the same as before.
      • The problem that resources in a block which failed to be released are reused has been fixed.
      • The problem that llbgstatus query caused LoadL_negotiator core dump has been fixed.
      • The LoadLeveler code has been modified to add newly created block into hash table so that the coschedule job can be dispatched.
      • The problem that reservation for large block was not honored has been fixed.
      • Add support for Blue Gene sub-block jobs.
      • The problem that LoadLeveler did not calculate I/O links correctly has been fixed.
      • Remove the misleading message for the drained resources.
      • Remove the flooding messages when the reservation becomes active.
    • Scheduler only:
      • The problem that LoadL_negotiator managing lists of job steps that require floating resources leads to a core dump has been fixed.
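    The enforce_bg_min_block_size keyword introduced in this level could be set as follows; a configuration sketch only, with behavior as described in the note above:

```
# LoadL_config fragment (sketch), keyword from the 5.1.0.16 notes:
#   true  - the I/O ratio has no effect on the block size
#   false - behavior is the same as before
enforce_bg_min_block_size = true
```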

    Problems fixed in LoadLeveler 5.1.0.15 [June 19, 2013]

    • Update for X86 LINUX (on May 20, 2013) and AIX (on June 19, 2013) ONLY
    • LoadLeveler is changed to ensure that whenever a resource manager daemon is started, it is notified of the active central manager.
    • The code path to write RAS records has been changed to avoid deadlock. The Schedd will no longer hang if there are a large number of jobs starting and terminating.
    • The S3 policy enhancement for the cluster level is added.
    • The LoadLeveler code has been modified to rename the free_list function so that the LoadL_negotiator will not terminate abnormally.
    • A new keyword that decides whether Hardware Performance Monitor counters are gathered is added.
    • The problem that LoadL_schedd reported incorrect power value has been fixed.
    • The problem that llq printed incorrect Coschedule state for step has been fixed.
    • Resource Manager only:
      • LoadLeveler has been changed to ensure that accounting records for terminating events are transmitted and recorded in the LoadLeveler history file for all machines used to run a parallel job.
      • The resource manager crash problem after reconfig power policy for the machine has been fixed.
      • The mail sent to the LoadLeveler administrators when a switch table error occurs is modified to reference a current document which provides information on debugging switch table problems.
      • Reference counting of LoadLeveler job objects has been corrected in the resource manager.
      • The LoadLeveler LoadL_startd daemon is fixed to remove the synchronization defect between the job termination and job step verification threads. Jobs completing normally will not be vacated for this reason.
    • Scheduler only:
      • The central manager will now recognize and account for all resources for all machines added to a LoadLeveler cluster, whether or not the machine is listed in the LoadLeveler admin file, when machine authentication is disabled.

    Problems fixed in LoadLeveler 5.1.0.14 [March 18, 2013]

    • Update for POWER LINUX ONLY
    • Fixed an intermittent dispatch problem for jobs submitted after an llctl drain startd command.
    • Fixed the problem of llsummary command core dumps.
    • LoadLeveler is modified to ignore a failure of the mkdir system call if the directory already exists.
    • Resource Manager only:
      • Fixed an issue of slow Startd daemon startup.
    • Blue Gene:
      • The class job count is now decremented if a Blue Gene job fails because of a failed block boot, so that subsequent jobs can be scheduled with the correct class slots value.
      • When a nodeboard is not available, LL will not add the whole midplane to the block for a step.
      • A new keyword value, loadl, for bg_cache_blocks has been added. When bg_cache_blocks = loadl, an initialized static block will not be reused by LoadLeveler if it is not explicitly required, and the static block will be freed after use.
      • When a cable is not available for a job step, LoadLeveler will show both ends of the cable in the llq -s command.
    • Scheduler only:
      • LoadLeveler has been changed to keep a rejected step away from the rejecting machine in the scheduler.
      • Performance improvement to the scheduling of work by the central manager.
      • When the LoadL_negotiator dispatches a job with co-scheduled steps, it will not abort if one of the steps fails to be dispatched.
      • The central manager will not core dump when successfully removing unusable RunClassRec objects.
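    The new bg_cache_blocks value described in this level could be configured as follows; a hedged sketch:

```
# LoadL_config fragment (sketch), value from the 5.1.0.14 notes:
# with loadl, an initialized static block is not reused unless explicitly
# required, and is freed after use.
bg_cache_blocks = loadl
```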

    Problems fixed in LoadLeveler 5.1.0.14 [March 15, 2013]

    • Update for X86 LINUX ONLY
    • Fixed an intermittent dispatch problem for jobs submitted after an llctl drain startd command.
    • Fixed the problem of llsummary command core dumps.
    • LoadLeveler is modified to ignore a failure of the mkdir system call if the directory already exists.
    • Resource Manager only:
      • Fixed an issue of slow Startd daemon startup.
    • Scheduler only:
      • LoadLeveler has been changed to keep a rejected step away from the rejecting machine in the scheduler.
      • Performance improvement to the scheduling of work by the central manager.
      • When the LoadL_negotiator dispatches a job with co-scheduled steps, it will not abort if one of the steps fails to be dispatched.
      • The central manager will not core dump when successfully removing unusable RunClassRec objects.

    Problems fixed in LoadLeveler 5.1.0.13 [March 11, 2013]

    • Update for LINUX POWER ONLY
    • Corrected an issue causing the LoadL_negotiator daemon to stall for several minutes at a time.
    • Refresh of the man pages.
    • Fixed an issue where the consumablememory setting of a node was being set to 0.
    • Fixed a problem with the Negotiator core dumping after a reconfig.
    • The import of environment variables containing semicolons has been corrected.
    • A new LoadLeveler job command file keyword, first_node_tasks, is added.
    • Blue Gene:
      • The BlueGene block holding the nodeboards which are in software error state will be freed after the job is terminated/completed. The nodeboards can be used for future scheduling.
    • Resource Manager only:
      • The Region Manager will no longer exit when the dgram port is in use.
      • Fixed an issue of LOADL_PROCESSOR_LIST not being set correctly for serial jobs.
      • The startd daemon will not crash when the value of keyword power_management_policy is reconfigured.
    • Scheduler only:
      • Fixed issue of scheduler getting into especially long dispatching cycles.
      • Performance improvement to the scheduling of work by the central manager.
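    The first_node_tasks keyword added in this level might appear in a job command file as follows; a sketch only, with all node and task counts illustrative:

```
# Job command file fragment (sketch) using first_node_tasks from 5.1.0.13:
# @ job_type         = parallel
# @ node             = 4
# @ first_node_tasks = 1
# @ tasks_per_node   = 16
# @ queue
```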

    Problems fixed in LoadLeveler 5.1.0.14 [March 4, 2013]

    • Update for AIX ONLY
    • Fixed an intermittent dispatch problem for jobs submitted after an llctl drain startd command.
    • Fixed the problem of llsummary command core dumps.
    • LoadLeveler is modified to ignore a failure of the mkdir system call if the directory already exists.
    • Resource Manager only:
      • Fixed an issue of slow Startd daemon startup.
    • Scheduler only:
      • LoadLeveler has been changed to keep a rejected step away from the rejecting machine in the scheduler.
      • Performance improvement to the scheduling of work by the central manager.
      • When the LoadL_negotiator dispatches a job with co-scheduled steps, it will not abort if one of the steps fails to be dispatched.
      • The central manager will not core dump when successfully removing unusable RunClassRec objects.

    Problems fixed in LoadLeveler 5.1.0.13 [December 10, 2012]

    • Update for AIX ONLY
    • Corrected an issue causing the LoadL_negotiator daemon to stall for several minutes at a time.
    • Refresh of the man pages
    • Performance improvement for termination of interactive jobs.
    • Fixed an issue where the consumablememory setting of a node was being set to 0.
    • Fixed a problem with the Negotiator core dumping after a reconfig.
    • The import of environment variables containing semicolons has been corrected.
    • Resource Manager only:
      • The Region Manager will no longer exit when the dgram port is in use.
      • Fixed an issue of LOADL_PROCESSOR_LIST not being set correctly for serial jobs.
    • Scheduler only:
      • Fixed issue of scheduler getting into especially long dispatching cycles.
      • Performance improvement to the scheduling of work by the central manager.

    Problems fixed in LoadLeveler 5.1.0.12 [October 12, 2012]

    • LoadLeveler now shows correct value of ConsumableCpus when machine group is configured.
    • The LoadLeveler job query commands will now return the correct "Step Cpus" value for the running job that requires ConsumableCpus in the node_resources keyword.
    • The central manager daemon will not core dump when attempting to use the VerifyJobs transaction to contact thousands of LoadLeveler startd daemons.
    • The LoadL_configurator daemon will not crash when the node tries to get the configuration data from the config hosts.
    • A core dump problem when running command llstatus -L machine has been fixed.
    • Resource Manager only:
      • The handling of hierarchical communication errors is restored to the prior release behavior.
      • LoadLeveler Startd daemon will ensure that the cpu map files are created before terminating a checkpointing job.
      • The LOADL_HOSTFILE environment variable will be set in the environment of the job prolog and the user environment prolog.
      • Obsolete code which attempts to terminate left over job processes is removed.
      • LoadLeveler enables the use of mdcr 5 for checkpoint/restart on AIX. The name of the mdcr library will be changed to libmdcr5.so and the binary ll_mdcr-checkpoint will be built as a 64 bit binary since libmdcr5 is 64 bit.
    • Blue Gene:
      • Once LoadLeveler detects an error on a Blue Gene I/O node or compute node, it puts the nodes into drain state, and if a block fails to boot three times, it is destroyed.
    • Scheduler only:
      • The scheduler will ignore any floating resource requirement with a 0 value.
      • A deadlock problem in the resource manager daemon has been fixed.

    Problems fixed in LoadLeveler 5.1.0.11 [August 27, 2012]

    • The core dump problem when fetching step adapter usage information has been fixed.
    • Fixed an issue where the llstatus -l -L command showed a submit-only node as down.
    • The negotiator daemon now correctly frees memory so that the core dump will not occur.
    • Resource Manager only:
      • The Region Manager has been modified to ignore all adapters on the same subnet as the adapter that was filtered out with adapter_list. Instead of the Region Manager marking those adapters down, those adapters will remain in an HB_UNKNOWN state.
    • Blue Gene:
      • If a Blue Gene job terminates due to a kill timeout, the node used by the job is available for future jobs after the block in use has been freed.
    • Scheduler only:
      • Only the messages from the last iteration of topdog scheduling are printed in the llq -s command; intermediate messages are not printed.
      • Accounting records that have a negative wall clock value are now skipped by the llsummary command.

    Problems fixed in LoadLeveler 5.1.0.10 [July 20, 2012]

    • The region manager failover and recovery code is changed to ensure that the resource manager is notified when a region manager becomes active which makes all active nodes and adapters available for scheduling.
    • Resource Manager only:
      • The resource manager daemon will no longer crash at LoadLeveler startup when D_FULLDEBUG is set for RESOURCE_MGR_DEBUG in the LoadL_config file.
    • Blue Gene:
      • LoadLeveler was changed to use the new checkIO() call for V1R1M1 BlueGene software.
      • The dependency check for the libbgsched shared object is removed from the LoadLeveler Blue Gene rpm so that the rpm nodeps option is no longer required.
      • The LoadLeveler llqres command will display the information for a Blue Gene reservation that specifies bg_block.
      • A check that was preventing Blue Gene reservations from being modified has been fixed so the change request can be processed.
      • When a nodeboard is down in one midplane, a Blue Gene small block job can run in the midplane if the resources can meet the job requirement.
      • The nodeboard list that is returned from the BGQ scheduler API may not always be in order. LoadLeveler will sort this list to ensure it is in order before indexing on it.

    Problems fixed in LoadLeveler 5.1.0.9 [June 19, 2012]

    • Update for LINUX on 64-bit Opteron or EM64T processors ONLY
    • Implemented internal LoadLeveler data contention improvements.
    • Jobs were rejected when the schedd daemon was unable to determine the protocol versions for the nodes allocated to a job step it was trying to dispatch. The correct protocol version is being called now so that the jobs will be started correctly.
    • Fixed Negotiator daemon memory leaks.
    • Incorrect error messages seen for user prolog/epilog during the llctl ckconfig command were fixed by correcting the internal user variable names.
    • Corrected inefficiency when reading configuration data from the database, and protected against the kinds of performance issues that had prevented LoadLeveler from starting when large systems are configured.
    • Corrected the lldbupdate to be able to update from 5.1.0.6 to 5.1.0.9.

    Problems fixed in LoadLeveler 5.1.0.8 [June 15, 2012]

    • Update for LINUX on POWER ONLY
    • Implemented internal LoadLeveler data contention improvements.
    • Jobs were rejected when the schedd daemon was unable to determine the protocol versions for the nodes allocated to a job step it was trying to dispatch. The correct protocol version is being called now so that the jobs will be started correctly.
    • Fixed Negotiator daemon memory leaks.
    • Incorrect error messages seen for user prolog/epilog during the llctl ckconfig command were fixed by correcting the internal user variable names.
    • Corrected inefficiency when reading configuration data from the database, and protected against the kinds of performance issues that had prevented LoadLeveler from starting when large systems are configured.
    • Corrected the lldbupdate to be able to update from 5.1.0.6 to 5.1.0.8.

    Problems fixed in LoadLeveler 5.1.0.7 [June 8, 2012]

    • Do not install LL 5.1.0.7 service update if you are using or planning to use a database for the LoadLeveler configuration.
    • The llstatus command showed startds as up even though the llrstatus command showed that the startd and the region manager they report to were actually down. The central manager is now notified by the resource manager when a startd is marked as down, so the llstatus command now shows the same state as the llrstatus command.
    • Fixed llconfig from core dumping if trying to add a new machine_group or region to a cluster that has more than 128 machines.
    • Fixed llconfig to correctly set the island in the machine_group.
    • Blue Gene:
      • LoadLeveler will correctly calculate the I/O ratio per midplane based on hardware state to support a mixed I/O environment on Blue Gene/Q.

    Problems fixed in LoadLeveler 5.1.0.6 [April 27, 2012]

    • Mandatory service pack for Red Hat Enterprise Linux 6 (RHEL6) on POWER servers.
    • The CAU value is now allocated correctly on all the nodes on which the job is run.
    • Resource Manager only:
      • Fixed a deadlock in the Region manager daemon when determining heartbeat status; llstatus information will now show the correct status after reconfig.
      • Fixed a startd daemon core dump when preempting a running job via the suspend method.
      • Fixed checking of process tracking during job termination so jobs will be able to terminate correctly in an environment that does not have process tracking set.
    • Blue Gene:
      • Enhanced the support for Blue Gene block booting failures by draining problem hardware from the LoadLeveler cluster.
      • Fixed problems with LoadLeveler scheduling blocks using pass through.
      • Updated llq -h command output to reflect changes in Blue Gene terminology (partitions are now referred to as blocks).
      • Corrected display of connectivity for large blocks in llsummary output.
      • Fixed a problem calculating the minimum block size for LoadLeveler jobs when midplanes contain errors with I/O links.

    Problems fixed in LoadLeveler 5.1.0.5 [April 4, 2012]

    • Fixed some memory leaks in Startd and Schedd daemons.
    • If there is no network statement in the job command file, then the default network is used, which assumes ethernet. If the cluster does not have ethernet configured, then the job will stay in the "ST" state and not run. The default network support will now use the adapter associated with the hostname with which the machine is configured in the administration file.
    • Fixed LoadL_master from core dumping during llctl stop in database environment due to timing locks.
    • Fixed LoadL_negotiator from core dumping by not sending corrupted job step data to the central manager.
    • Fixed the lldbupdate command from getting the 2544-019 error message by parsing the database information correctly, so LoadLeveler will be able to start up.
    • Resource Manager only:
      • A problem in pe_rm_connect() that caused read() to be called on a socket that was not ready to be read has been corrected, allowing pe_rm_connect() to continue to retry the connection for the specified rm_timeout amount of time.
    • Scheduler only:
      • The list of reserved resources was not being updated properly when a reservation requesting a 0 count ended, leading to a core dump. The reservation list is now being updated correctly in all cases.
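    The default-network behavior described in this level applies when a job command file has no network statement. A hedged sketch of an explicit network statement that avoids relying on the default; the network keyword syntax is standard LoadLeveler, and the values shown are illustrative:

```
# Job command file fragment (sketch) requesting a specific network
# instead of the default (which assumes ethernet):
# @ network.MPI = sn_all,not_shared,US
```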

    Problems fixed in LoadLeveler 5.1.0.4 [March 16, 2012]

    • Mandatory service pack for Red Hat Enterprise Linux 6 (RHEL6) and SUSE LINUX Enterprise Server 11 (SLES11) on servers with 64-bit Opteron or EM64T processors.
    • LoadLeveler can now display the host name correctly based on the name_server configuration. The previous limitation of the name_server keyword being ignored is now lifted.
    • An issue on SLES11 where lldbupdate failed to connect to the database due to an incorrect odbc.ini location is now corrected.
    • Fixed Linux schedd daemon core dump in a mixed AIX and Linux cluster when submitting a job from the AIX cluster.
    • Fixed potential central manager deadlock.

    LoadLeveler Corrective Fix listing
    Fix Level AIX APAR numbers
    LL 5.1.0.19 resource manager: IV70828 IV66650 IV66649 IV66648 IV65684 IV64830 IV64585 IV63275 IV61832 IV61464 IV60185 IV69100 IV58447 IV57426
    scheduler: IV70829 IV70828 IV66932 IV65080 IV64830 IV64585 IV61832 IV61464 IV60708 IV60185 IV69101 IV58447
    LL 5.1.0.18 resource manager: IV54299 IV54330 IV00018 IV54332 IV54333 IV54334 IV53405 IV55247 IV55997 IV55306
    scheduler: IV54300 IV44552 IV00018 IV48119 IV50037 IV54335 IV54336 IV55248 IV55307 IV55998
    LL 5.1.0.14 resource manager: IV33552 IV29510 IV34246 IV34249 IV34251 IV32497
    scheduler: IV23900 IV34256 IV34254 IV34248 IV34247 IV34257 IV34258 IV32624 IV34259 IV34250 IV34252 IV34255 IV34818 IV33554
    LL 5.1.0.13 resource manager: IV29990 IV31169 IV31171 IV31174 IV31177 IV31178 IV31267
    scheduler: IV30011 IV31170 IV31172 IV31175 IV31176 IV31179 IV31400
    LL 5.1.0.8 resource manager: IV25429 IV18362 IV27838 IV27839 IV23242 IV23444 IV27848 IV25772
    scheduler: IV25430 IV27835 IV20675 IV21261 IV27840 IV25447
    LL 5.1.0.7 resource manager: IV23818 IV22084
    scheduler: IV23819 IV23820 IV23821
    LL 5.1.0.6 resource manager: IV16600 IV19851 IV19911 IV20248
    scheduler: IV18061 IV19617 IV19910
    LL 5.1.0.5 resource manager: IV17276
    scheduler: IV18682
    LL 5.1.0.4* resource manager: IV13778 IV14094 IV14182 IV14380 IV14458 IV15105 IV16304 IV16306
    scheduler: IV14096 IV14408 IV15545 IV16135
    LL 5.1.0.3 resource manager: IV11682 IV11747 IV11750 IV11938 IV12585
    scheduler: IV11748 IV12199 IV12586 IV12587 IV12588
    LL 5.1.0.2 resource manager: IV07387 IV07389 IV08361 IV08531
    scheduler: IV07388 IV07390 IV08362 IV08364 IV08532
    LL 5.1.0.1 resource manager: IV03487 IV03490 IV03498 IV05131 IZ03487
    scheduler: IV03488 IV03489 IV03496 IV03497 IV03499 IV03500 IV05132 IZ03488


Document Information

Modified date:
17 March 2022

UID

isg400002206