Direct links to fixes
6.3.6.000-TIV-TSMRPT-AGENT-Linux
6.3.6.000-TIV-TSMRPT-AGENT-AIX
6.3.6.000-TIV-TSMRPT-AIX
6.3.6.000-TIV-TSMRPT-AGENT-Windows
6.3.6.000-TIV-TSMRPT-Linuxx86
6.3.6.000-TIV-TSMRPT-Linuxx86_64
6.3.6.000-TIV-TSMRPT-WindowsI32
6.3.6.000-TIV-TSMRPT-WindowsX64
6.3.6.000-TIV-TSMAC-WindowsX64
6.3.6.000-TIV-TSMAC-WindowsI32
6.3.6.000-TIV-TSMAC-SolarisSPARC
6.3.6.000-TIV-TSMAC-Linuxx86
6.3.6.000-TIV-TSMAC-Linuxs390x
6.3.6.000-TIV-TSMAC-AIX
6.3.6.000-TIV-TSMALL-SolarisSPARC
6.3.6.000-TIV-TSMALL-WindowsX64
6.3.6.000-TIV-TSMSTA-WindowsI32
6.3.6.000-TIV-TSMALL-HP-UX
6.3.6.000-TIV-TSMALL-Linuxppc64
6.3.6.000-TIV-TSMALL-AIX
6.3.6.000-TIV-TSMALL-Linuxs390x
6.3.6.000-TIV-TSMALL-Linuxx86_64
7.1.3.000-TIV-TSMSRV-WIN
7.1.3.000-TIV-TSMSRV-SolarisSPARC
7.1.3.000-TIV-TSMSRV-Linuxx86_64
7.1.3.000-TIV-TSMSRV-Linuxs390x
7.1.3.000-TIV-TSMSRV-Linuxppc64
7.1.3.000-TIV-TSMSRV-HP-UX
7.1.3.000-TIV-TSMSRV-AIX
IBM Tivoli Storage Manager V6.3 Fix Pack 6 (6.3.6.000) Server Downloads
APAR status
Closed as program error.
Error description
During node replication on the target server, the user may see unexpectedly high memory usage by the dsmserv process. Although the SHOW ALLOC data reports that this memory has been released by the dsmserv process, the system does not free it for other processes in a timely manner. The user may also see unexpected system memory paging.

High memory usage can occur when many node replication sessions work on the same filespace and become blocked on the target server. This is typically caused by issuing a separate REPLICATE NODE command for only one or a few filespaces while keeping the default MAXSESSions value of ten.

Tivoli Storage Manager Versions Affected: V6.3, V7.1

L2 Diagnostics:
Example of the SHOW ALLOC data:

  ...
  Server-tracked memory statistics:
  ----------------------------------------------------
  Total memory allocated            74283125144 bytes
  Base MemUtil (amount requested)   74282414873 bytes
  Actual mem allocated              75771572352 bytes
  ...
  imqueue.c line 5118: xxxx entries for 68399950024 bytes
    (PreAllocChunkForInvUpdateItems) thread yyy(psSessionThread parent=zzz)
  ...

This one thread is using 99% of the used memory. A later SHOW ALLOC output for the same process shows far less memory in use (taken after node replication on the target server completed):

  imqueue.c line 5118: x entries for 31509576 bytes
    (PreAllocChunkForInvUpdateItems) thread www(psSessionThread parent=zzz)

Initial Impact: High

Additional Keywords: system paging crashed crash dump core out of memory DBMEMPERCENT
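For illustration only, the command pattern that typically triggers this condition looks like the following, where NODE1 and the filespace names are hypothetical and each command is left at the default MAXSESSions value of ten (see the REPLICATE NODE command reference for exact syntax):

  replicate node NODE1 /fs1
  replicate node NODE1 /fs2
  replicate node NODE1 /fs3

Each command can start up to ten sessions against a single filespace on the target server, and the blocked sessions accumulate the memory reported under PreAllocChunkForInvUpdateItems. See the Local fix section for a workaround.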
Local fix
To reduce the memory usage, use a node group name instead of a single node name in each REPLICATE NODE command. If a single node name must be used in the REPLICATE NODE command, set MAXSESSions to 2 or 1, as illustrated below.
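A hedged illustration of the workaround, using hypothetical names (PROD_GROUP is assumed to be a node group previously created with DEFINE NODEGROUP that contains the nodes to replicate):

  replicate node PROD_GROUP
  replicate node NODE1 maxsessions=2

The first form replicates the nodes as a group in a single command; the second limits a single-node replication to two sessions, which avoids the buildup of blocked sessions described in the error description.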
Problem summary
****************************************************************
* USERS AFFECTED:                                              *
* All Tivoli Storage Manager server users of node replication  *
****************************************************************
* PROBLEM DESCRIPTION:                                         *
* See Error Description                                        *
****************************************************************
* RECOMMENDATION:                                              *
* Apply fixing level when available. This problem is           *
* currently projected to be fixed in levels 6.3.6.0 and        *
* 7.1.3.0.                                                     *
****************************************************************
Problem conclusion
The problem is fixed. Affected platforms: AIX, Solaris, Linux, Windows, and HP-UX.
Temporary fix
Comments
APAR Information
APAR number
IT08512
Reported component name
TSM SERVER
Reported component ID
5698ISMSV
Reported release
63A
Status
CLOSED PER
PE
NoPE
HIPER
NoHIPER
Special Attention
NoSpecatt / Xsystem
Submitted date
2015-04-23
Closed date
2015-05-15
Last modified date
2015-05-15
APAR is sysrouted FROM one or more of the following:
APAR is sysrouted TO one or more of the following:
Fix information
Fixed component name
TSM SERVER
Fixed component ID
5698ISMSV
Applicable component levels
R63A PSY
UP
R63H PSY
UP
R63L PSY
UP
R63S PSY
UP
R63W PSY
UP
R71A PSY
UP
R71H PSY
UP
R71L PSY
UP
R71S PSY
UP
R71W PSY
UP
Document Information
Modified date:
15 May 2015