This document describes how to use the monitoring server automated deployment status table clearing feature.
Each time you run a command-line interface command or use the portal client to remotely manage an agent, information about the transaction is preserved in the monitoring server's deployment status table. To make it easier to manage the contents of this table, especially in large-scale environments, you can schedule the periodic clean-up of completed transactions. Enabling this feature lets you review completed deployment transactions at opportune times and reduces monitoring server overhead by keeping the table small.
To schedule periodic clearing of completed transactions from the deployment status table, you need to specify the frequency. You can do this either by adding an environment variable (CLEARDEPLOYSTATUSFREQ) to the monitoring server's configuration file, so that the server enables automated clearing at startup, or by setting the environment variable on an already running monitoring server through the service console interface. If you use the service console, you can also change the clearing interval or disable clearing altogether.
When automated clearing is enabled, the monitoring server automatically finds deployment transactions that have completed, removes them from the deployment status table to reduce its size, and records information about each deleted transaction in a log file for analysis. Automated clearing runs at the hourly interval you specify with the environment variable; the interval is measured from the monitoring server start time.
Two ways to enable the automated clearing of the deployment status table
- Modify the monitoring server environment file
You can enable this feature at monitoring server startup by adding the following environment variable to the monitoring server environment file:

CLEARDEPLOYSTATUSFREQ=X

where X is the number of hours between automated clearings of the deployment status table. If X is zero or this environment variable is not specified in the environment file, automatic clearing of the table is disabled. Valid values are 0 to 720.
For example, to enable clearing the table every hour after the monitoring server starts, complete the following steps for your platform.

On Windows:
1. On the system where the Tivoli Enterprise Monitoring Server is installed, select Start → Programs → IBM Tivoli Monitoring → Manage Tivoli Monitoring Services.
2. Right-click the monitoring server, select Advanced and then Edit ENV File from the menu.
3. If a monitoring server message is displayed, click OK to close it.
4. Add a new line CLEARDEPLOYSTATUSFREQ=1.
5. Click Save.
6. Click Yes to implement your changes and recycle the service.
On UNIX or Linux:
1. Change directory (cd) to <install_dir>/config, where <install_dir> is the location of your monitoring installation.
2. Add the line CLEARDEPLOYSTATUSFREQ=1 to the <hostname>_ms_<temsname>.config and ms.ini files, where <hostname> and <temsname> are the case-sensitive host name and monitoring server name, respectively.
3. Save the files.
4. Recycle the monitoring server.
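The UNIX or Linux edit above can be scripted. The sketch below is illustrative only: the install directory, host name, and monitoring server name are placeholders, not values from your actual installation, and the guard keeps the script from appending the variable twice.

```shell
# Illustrative only: INSTALL_DIR and the file names are placeholders.
INSTALL_DIR=/tmp/itm_demo                       # normally your ITM install dir
mkdir -p "$INSTALL_DIR/config"
CFG="$INSTALL_DIR/config/myhost_ms_MYTEMS.config"   # <hostname>_ms_<temsname>.config
INI="$INSTALL_DIR/config/ms.ini"
touch "$CFG" "$INI"

# Clear the table every hour (valid values 0-720; 0 disables clearing).
for f in "$CFG" "$INI"; do
  grep -q '^CLEARDEPLOYSTATUSFREQ=' "$f" || echo 'CLEARDEPLOYSTATUSFREQ=1' >> "$f"
done

# Show the resulting setting in both files.
grep 'CLEARDEPLOYSTATUSFREQ' "$CFG" "$INI"
```

After running the script against your real configuration files, recycle the monitoring server so the new setting takes effect.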
- Modify the monitoring server environment using the service console interface
You can enable this feature, change the interval, or disable it altogether by setting the environment variable through the service console interface.
Refer to the IBM Tivoli Monitoring Troubleshooting Guide, chapter "Tools," section "Using the IBM Tivoli Monitoring Service Console," for information on how to set or reset a monitoring server environment variable. The variable associated with automatic clearing of the deployment status table is CLEARDEPLOYSTATUSFREQ.
You can alter the hourly interval value (X) by entering the following command at the service console prompt:
bss1 setenv CLEARDEPLOYSTATUSFREQ=X
where X is the number of hours between automated clearings of the deployment status table. If X is zero or this environment variable is not specified, automatic clearing of the table is disabled. Valid values are 0 to 720.
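For example, assuming a running hub monitoring server, you could change the interval to four hours, or disable clearing entirely, from the service console prompt (the interval values shown are illustrative):

```
bss1 setenv CLEARDEPLOYSTATUSFREQ=4
bss1 setenv CLEARDEPLOYSTATUSFREQ=0
```

The first command sets a four-hour interval; the second, with a value of zero, disables automated clearing.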
How to override the log file path name
You can change the location of the log file in which the monitoring server records the transactions cleared from the deployment status table by specifying the CLEARLOG environment variable in the same configuration files described above. However, unlike CLEARDEPLOYSTATUSFREQ, the CLEARLOG variable cannot be changed by using the service console.
The default file name is cleardeploystatus.log, located in the logs subdirectory of the monitoring installation. You can change this default to any valid, fully qualified path name on the local system or on a mounted file system. A mounted file system is useful when you have both a monitoring server and a backup server: by pointing both systems at the same fully qualified path name on the mount, the log accommodates failover conditions.
Note: The active hub monitoring server performs the automated clearing of the deployment status table for the entire enterprise. If you have a backup monitoring server for the hub, set the environment variable on the backup to the same value as on the hub so that the clearing process continues if the primary monitoring server fails over.
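For example, to direct the clearing log to a shared, mounted location from both the hub and its backup, you might add lines like the following to each server's environment file. The mount point shown is an illustrative placeholder, not a path from your installation:

```
CLEARDEPLOYSTATUSFREQ=1
CLEARLOG=/shared/itm/logs/cleardeploystatus.log
```

Because both servers write to the same fully qualified path, the clearing history remains in one place across a failover.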
Log file contents
The log file contains an entry for each deployment transaction that has completed and been removed from the deployment status table by the automated clearing process. Each entry is preceded by the timestamp of when it was written to the log.
Each time the monitoring server is started, it writes the following header to the log file:
--- Clear Deploy Status Log ---
Each transaction that is cleared from the deployment status table will have the following information written to the log file:
- Transaction ID: the global transaction identifier of the completed transaction
- Submitted: the timestamp at which the transaction was initially submitted for processing
- Command: the deployment command that was processed
- Status: the completion status (SUCCESS or FAILURE)
- Retries: the number of times the transaction was tried before it completed
- Monitoring server: the name of the monitoring server responsible for processing the transaction
- Target hostname: the Managed System Name or Managed Node identifier where the command completed
- Platform: the reported platform architecture of the OS agent running on the target
- Product: the product code of the agent for which the transaction was processed
- Version: the version of the product for which the transaction was attempted
- Completion message: if the status is FAILURE, an explanation of the reason for the failure
More support for:
ITM Tivoli Enterprise Mgmt Server V6
Operating system(s): AIX, AIX 64bit, HP-UX, Linux, Linux Red Hat - iSeries, Linux Red Hat - pSeries, Linux Red Hat - xSeries, Linux Red Hat - zSeries, Linux SUSE - iSeries, Linux SUSE - xSeries, Linux SUSE - zSeries, Linux xSeries, Linux zSeries, Linux/x86, Solaris, Solaris Opteron, Windows, Windows 2000, Windows 2003 server, Windows 7, Windows Vista, Windows XP
Software edition: All Editions
Reference #: 1592089
Modified date: 12 October 2012