Readmes are available
IBM Netcool System Service Monitor SSM 4.0 Fix Pack 10 README 4.0.0-TIV-SSM-FP0010
IBM Netcool System Service Monitor SSM 4.0 Fix Pack 11 README 4.0.0-TIV-SSM-FP0011
IBM Netcool System Service Monitor SSM 4.0 Fix Pack 12 README 4.0.0-TIV-SSM-FP0012
IBM Netcool System Service Monitor SSM 4.0 Fix Pack 13 README 4.0.0-TIV-SSM-FP0013
IBM Netcool System Service Monitor SSM 4.0 Fix Pack 14 README 4.0.0-TIV-SSM-FP0014
IBM Netcool System Service Monitor SSM 4.0 Interim Fix 1 README 4.0.0.14-TIV-SSM-IF0001
IBM Netcool System Service Monitor SSM 4.0 Interim Fix 2 README 4.0.0.14-TIV-SSM-IF0002
IBM Netcool System Service Monitor SSM 4.0 Interim Fix 3 README 4.0.0.14-TIV-SSM-IF0003
IBM Netcool System Service Monitor SSM 4.0 Interim Fix 4 README 4.0.0.14-TIV-SSM-IF0004
IBM Netcool System Service Monitor SSM 4.0 Interim Fix 5 README 4.0.0.14-TIV-SSM-IF0005
IBM Netcool System Service Monitor SSM 4.0 Interim Fix 6 README 4.0.0.14-TIV-SSM-IF0006
IBM Netcool System Service Monitor SSM 4.0 Interim Fix 7 README 4.0.0.14-TIV-SSM-IF0007
IBM Netcool System Service Monitor SSM 4.0 Interim Fix 8 README 4.0.0.14-TIV-SSM-IF0008
IBM Netcool System Service Monitor SSM 4.0 Interim Fix 1 README 4.0.0.15-TIV-SSM-IF0001
APAR status
Closed as program error.
Error description
The SSM agent on the Solaris 10 server with 128 CPUs dumps core every time I try to load any configuration, whether a simple logMonX configuration or my base SSM configuration that is currently running on ~500 other servers. This is a plain Global zone with no sub-zones configured; it is a simple Solaris 10 installation at this point. I patched to FP7 and still see the same problem.

Ticket History:
--------------------
The customer has large quad-core CPU servers that report 128 CPUs via the SSM agent. Proviso discovers this correctly, but no data is being graphed, so there may be some high-end limit being hit. In addition, around 25 SAN devices hang off this server. The customer therefore suspects an upper limit is being reached: the SSM agent chokes when starting, and Proviso either is not getting the data from the agent or is not handling it correctly.
Local fix
ssm40-memsetfix1-solaris-sparc.run
Problem summary
****************************************************************
USERS AFFECTED: Customers running SSM 4.0 on Solaris 10 SPARC.
****************************************************************
PROBLEM DESCRIPTION: ssmagent.bin crashes (SEGV) in memset,
malloc or free, and dumps core.
****************************************************************
RECOMMENDATION: Upgrade to fix pack 8 for SSM 4.0.
****************************************************************
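Why a defect in memset surfaces as crashes in malloc or free may not be obvious. The following is a minimal C sketch of the general failure pattern only, not the libc_psr defect itself: a memset that writes past the end of its buffer clobbers the allocator's bookkeeping for the neighboring heap block, so the SEGV appears later, inside the allocator.

    /* Illustration of the failure pattern described above (deliberate
     * undefined behavior; typically crashes or aborts when run). */
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        char *a = malloc(16);
        char *b = malloc(16);

        /* Overrun: zeroes a's 16 bytes plus the heap metadata that
         * follows it -- the same effect as the flawed memset. */
        memset(a, 0, 64);

        free(b);   /* allocator walks the corrupted metadata here */
        free(a);
        return 0;
    }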
Problem conclusion
Under some rare conditions on Solaris 10, the SPARC-optimized memset function in libc_psr would overrun the memory it is given. The SSM solution is to insert the "memset.so" workaround, which replaces the flawed memset function. We believe this bug is known to Sun as Change Request 6507249 and was probably introduced by Solaris patch 137137 (up to and including revision 09). Sun has not yet released a fix for Solaris 10.

The fix for this APAR is contained in the following maintenance packages:

| fix pack | 4.0.0-TIV-SSM-FP0008
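The shipped "memset.so" is delivered prebuilt; as a rough sketch of the interposition mechanism it relies on, a preloaded shared object can define its own memset, and that symbol then takes precedence over the libc/libc_psr version at runtime. The file and function names below are from this APAR, but the implementation is an assumption, shown only to illustrate the technique:

    /* Sketch of an LD_PRELOAD memset interposer: a plain, safe
     * byte-at-a-time implementation standing in for the optimized
     * libc_psr routine. Not the actual contents of memset.so. */
    #include <stddef.h>

    void *memset(void *s, int c, size_t n)
    {
        unsigned char *p = s;
        while (n-- != 0)
            *p++ = (unsigned char)c;
        return s;
    }

Built as a shared object (for example, gcc -shared -fPIC -o memset.so memset_fix.c) and activated with LD_PRELOAD=./memset.so before starting ssmagent.bin, the interposed memset is resolved ahead of the platform-optimized one, sidestepping the overrun.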
Temporary fix
Comments
APAR Information
APAR number
IZ49683
Reported component name
NETCOOL SYS SVC
Reported component ID
5724P4300
Reported release
400
Status
CLOSED PER
PE
NoPE
HIPER
NoHIPER
Special Attention
NoSpecatt / Xsystem
Submitted date
2009-04-23
Closed date
2009-04-29
Last modified date
2009-08-11
APAR is sysrouted FROM one or more of the following:
APAR is sysrouted TO one or more of the following:
Fix information
Fixed component name
NETCOOL SYS SVC
Fixed component ID
5724P4300
Applicable component levels
R400 PSN UP
Document Information
Modified date:
11 August 2009