Configuring the agent on Windows systems

You can configure the agent on Windows systems by using the IBM Performance Management window.

Procedure

  1. Click Start > All Programs > IBM Monitoring agents > IBM Performance Management.
  2. In the IBM Performance Management window, right-click Monitoring Agent for Hadoop.
  3. Click Configure agent.
    Attention: If Configure agent is disabled, click Reconfigure.
    The Configure Monitoring Agent for Hadoop window opens.
  4. To monitor the Hadoop cluster with the Kerberos SPNEGO-based authentication enabled, complete these steps:
    1. Under Is Kerberos SPNEGO-based authentication for HTTP based Hadoop services in Hadoop cluster enabled, click Yes.
      If you do not have Kerberos SPNEGO-based authentication to secure the REST endpoints of HTTP based Hadoop services in the Hadoop cluster, click No. You can then leave the Realm name, KDC Hostname, SPNEGO principal name, and SPNEGO keytab file fields blank.
    2. In the Realm name field, enter the name of the Kerberos realm that is used to create service principals.
      Usually, a realm name is the same as your domain name. For instance, if your computer is in the tivoli.ibm.com domain, the Kerberos realm name is TIVOLI.IBM.COM. This name is case sensitive.
    3. In the KDC Hostname field, enter the fully qualified domain name (FQDN) of the Key Distribution Center (KDC) host for the specified realm.
      You can also specify the IP address of the KDC host instead of the FQDN. For an Active Directory KDC, the domain controller is the KDC host.
    4. In the SPNEGO principal name field, enter the name of the Kerberos principal that is used to access SPNEGO authenticated REST endpoints of HTTP-based services.
      The name is case sensitive, and its format is HTTP/fully_qualified_host_name@kerberos_realm.
    5. In the SPNEGO keytab file field, enter the name of the keytab file for the SPNEGO service with its full path, or click Browse and select it.
      The keytab file contains the names of Kerberos service principals and keys. This file provides direct access to Hadoop services without requiring a password for each service. The file can be located at the following path: etc/security/keytabs/
      Ensure that the SPNEGO principal name and the keytab file belong to the same host. For instance, if the principal name is HTTP/abc.ibm.com@IBM.COM, the keytab file that is used must belong to the abc.ibm.com host.
      If the agent is installed on a remote computer, copy the keytab file of the principal to the remote computer at any path, and then specify this path in the SPNEGO keytab file field.
    6. Click Validate Parameters to validate the Kerberos parameters for the Kerberos based Hadoop cluster.
      After you click Validate Parameters, an appropriate validation message is displayed when:
      • The value for Realm name is blank.
      • The value for KDC Hostname is blank.
      • The value for SPNEGO principal name is blank.
      • The value for SPNEGO keytab file is blank.
      Update the configuration values as suggested in the validation messages and validate parameters again.
    7. Click Next.
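The blank-field and format checks that Validate Parameters performs on the Kerberos fields can be sketched as follows. This is an illustrative sketch, not the agent's actual code; the function name and message strings are assumptions modeled on the validation messages listed above.

```python
import re

# Expected SPNEGO principal format: HTTP/fully_qualified_host_name@kerberos_realm
PRINCIPAL_RE = re.compile(r"^HTTP/[^@/\s]+@[^@\s]+$")

def validate_spnego_config(realm, kdc_host, principal, keytab_path):
    """Return validation messages; an empty list means the values look usable."""
    messages = []
    if not realm.strip():
        messages.append("The value for Realm name is blank.")
    if not kdc_host.strip():
        messages.append("The value for KDC Hostname is blank.")
    if not principal.strip():
        messages.append("The value for SPNEGO principal name is blank.")
    elif not PRINCIPAL_RE.match(principal):
        messages.append("The SPNEGO principal name must use the format "
                        "HTTP/fully_qualified_host_name@kerberos_realm.")
    if not keytab_path.strip():
        messages.append("The value for SPNEGO keytab file is blank.")
    return messages
```

For example, `validate_spnego_config("IBM.COM", "kdc.ibm.com", "HTTP/abc.ibm.com@IBM.COM", "C:\\keytabs\\spnego.keytab")` returns an empty list, while a blank realm or a principal without the HTTP/ prefix produces a corresponding message.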
  5. To monitor Hadoop cluster with HTTPS/SSL enabled, complete these steps:
    1. Under Are Hadoop daemons-HDFS, YARN and MapReduce/MapReduce2 SSL enabled, click Yes.
      If SSL is not enabled for the Hadoop cluster, click No. You can then leave the TrustStore file path and TrustStore Password fields blank.
    2. In TrustStore file path, select the TrustStore file that is stored on your local computer.
      You can copy this file from the Hadoop cluster to your local computer and then use it for configuration.
    3. In TrustStore Password, enter the password that you created when you configured the TrustStore file.
    4. Click Validate Parameters to validate the truststore parameters for SSL Hadoop daemons.
      After you click Validate Parameters, an appropriate validation message is displayed when:
      • The value for TrustStore File Path is blank.
      • The value for TrustStore Password is blank.
      Update the configuration values as suggested in the validation messages and validate parameters again.
    5. Click Next.
  6. To specify values for the parameters of the Hadoop cluster, complete these steps:
    1. In the Unique Hadoop Cluster Name field, enter a unique name for the Hadoop cluster that indicates the Hadoop version and distribution. The maximum length of this field is 12 characters.
    2. In the NameNode Hostname field, enter the host name of the node where the daemon process for NameNode runs.
    3. In the NameNode Port field, enter the port number that is associated with the daemon process for NameNode. The default port number is 50070.
    4. In the ResourceManager Hostname field, enter the host name of the node where the daemon process for ResourceManager runs.
    5. In the ResourceManager Port field, enter the port number that is associated with the daemon process for ResourceManager. The default port number is 8088.
    6. Optional: In the JobHistoryServer Hostname field, enter the host name of the node where the daemon process for JobHistoryServer runs.
    7. Optional: In the JobHistoryServer Port field, enter the port number that is associated with the daemon process for JobHistoryServer. The default port number is 19888.
    8. Optional: In the Additional NameNode Hostname field, enter the host name where the daemon process for a Standby NameNode or a Secondary NameNode runs.
    9. Optional: In the Additional NameNode Port field, enter the port number that is associated with the daemon process for a Standby NameNode or a Secondary NameNode.
      Remember: If the additional NameNode is a Standby NameNode, the default port number that is associated with the Standby NameNode daemon process is 50070. If the additional NameNode is a Secondary NameNode, the default port number that is associated with the Secondary NameNode daemon process is 50090.
    10. Click Test Connection to verify connection to the specified host names and ports.
      After you click Test Connection, an appropriate validation message is displayed when:
      • The connection to the specified host names and ports succeeds or fails.
      • A value for a host name is blank.
      • A value for a port is blank.
      • A non-integer value is specified for a port number.
      Update the configuration values as suggested in the validation messages, and verify the connection again.
    11. Optional: To add Standby ResourceManagers in the Hadoop cluster, click Yes under Standby ResourceManager(s) in Hadoop Cluster.
      You are prompted to add the details of Standby ResourceManagers later.
    12. Optional: To monitor Hadoop services in the Hadoop cluster that is managed by Apache Ambari, click Yes under Monitoring of Hadoop services for Ambari based Hadoop installations, and then click Next.
    13. Optional: To monitor Cloudera Manager services in the Cloudera Hadoop cluster, click Yes under Monitoring of Cloudera Manager services for Cloudera Hadoop installations, and then click Next.
  7. Optional: To specify the details of Kerberos Configuration for REST endpoints of Ambari Server, complete the following steps:
    1. Under Is Kerberos authentication enabled for the REST endpoints of Ambari Server, click Yes.
      If you do not have Kerberos authentication enabled for the REST endpoints of the Ambari Server, click No. You can then leave the Realm name, KDC Hostname, Ambari principal name, Ambari keytab file, Ambari Server Hostname, and Ambari Server Port fields blank.
    2. In the Realm name field, enter the name of the Kerberos realm that is used to create service principals.
      Usually, a realm name is the same as your domain name. For instance, if your computer is in the tivoli.ibm.com domain, the Kerberos realm name is TIVOLI.IBM.COM. This name is case sensitive.
    3. In the KDC Hostname field, enter the fully qualified domain name (FQDN) of the Key Distribution Center (KDC) host for the specified realm.
      You can also specify the IP address of the KDC host instead of the FQDN. For an Active Directory KDC, the domain controller is the KDC host.
    4. In the Ambari principal name field, enter the name of the Ambari principal that is used to access Kerberos authenticated REST endpoints of Ambari Server.
      The name is case sensitive, and the name format is ambari-server-username@kerberos_realm.
    5. In the Ambari keytab file field, enter the name of the keytab file for the Ambari service with its full path, or click Browse and select the file.
      The keytab file contains the names of Ambari service principals and keys. This file provides direct access to the REST endpoints of the Ambari Server without requiring a password for each service. The file can be located at the following path: etc/security/keytab/.
      If the agent is installed on a remote computer, copy the keytab file of the principal to the remote computer at the designated path, and then specify the path in the Ambari keytab file field.
    6. In the Ambari server Hostname field, enter the host name where the Ambari server runs.
    7. In the Ambari server Port field, enter the port number that is associated with the Ambari server.
      The default port number is 8080.
    8. Click Next.
  8. Optional: To specify the details of the Ambari server for monitoring Hadoop services, complete the following steps:
    1. In the Ambari server Hostname field, enter the host name where the Ambari server runs.
    2. In the Ambari server Port field, enter the port number that is associated with the Ambari server.
      The default port number is 8080.
    3. In the Username of Ambari user field, enter the name of the Ambari user.
    4. In the Password of Ambari user field, enter the password of the Ambari user.
    5. Under Are Ambari Services SSL enabled, click Yes.

      If SSL is not enabled for the Ambari services, click No. You can then leave the TrustStore file path and TrustStore Password fields blank.

    6. In TrustStore file path, select the TrustStore file that is stored on your local computer. You can copy this file from the Hadoop cluster to your local computer and then use it for configuration.
    7. In TrustStore Password, enter the password that you created when you configured the TrustStore file.
      Note: If you already provided the same TrustStore file path and TrustStore Password values in step 5, you can leave these fields blank.
    8. Click Next.
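Behind the scenes, monitoring Hadoop services through Ambari amounts to calling the Ambari REST API with HTTP basic authentication, using the host name, port, user name, and password entered above. The following is a hedged sketch: the /api/v1/clusters path is the standard Ambari REST entry point, but the helper name and usage are illustrative assumptions, not the agent's implementation.

```python
import base64
from urllib.request import Request, urlopen

def ambari_get(host, port, user, password, path="/api/v1/clusters", scheme="http"):
    """Build a basic-auth GET request for the Ambari REST API."""
    token = base64.b64encode("{}:{}".format(user, password).encode("utf-8")).decode("ascii")
    request = Request("{}://{}:{}{}".format(scheme, host, port, path))
    request.add_header("Authorization", "Basic " + token)
    return request

# request = ambari_get("ambari.example.com", 8080, "admin", "secret")
# with urlopen(request, timeout=5) as response:
#     print(response.status)  # 200 means the server and credentials are usable
```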
  9. Optional: To specify the details of the Cloudera Manager server for monitoring Cloudera Manager services, complete the following steps:
    1. In the Cloudera Manager server Hostname field, enter the host name where the Cloudera Manager server runs.
    2. In the Cloudera Manager server Port field, enter the port number that is associated with the Cloudera Manager server.
      The default port number for HTTP based Cloudera Manager server is 7180.
    3. In the Username of Cloudera Manager server user field, enter the name of the Cloudera Manager server's user.
    4. In the Password of Cloudera Manager server user field, enter the password of the Cloudera Manager server's user.
    5. Under Are Cloudera Manager Services SSL enabled, click Yes.

      If SSL is not enabled for the Cloudera Manager services, click No. You can then leave the TrustStore file path and TrustStore Password fields blank.

    6. In TrustStore file path, select the TrustStore file that is stored on your local computer. You can copy this file from the Hadoop cluster to your local computer and then use it for configuration.
    7. In TrustStore Password, enter the password that you created when you configured the TrustStore file.
      Note: If you already provided the same TrustStore file path and TrustStore Password values in step 5, you can leave these fields blank.
    8. Click Next.
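The Cloudera Manager server host name, port, and SSL answer resolve to a base URL for its REST API. A small sketch of how the scheme and default port follow from those answers (7180 is the documented HTTP default; 7183 is commonly used for HTTPS, but that value is an assumption to verify for your cluster):

```python
# Default ports: 7180 is the documented HTTP port; 7183 is the usual
# HTTPS port (an assumption; verify it for your cluster).
DEFAULT_PORTS = {"http": 7180, "https": 7183}

def cm_base_url(host, ssl_enabled=False, port=None):
    """Build the Cloudera Manager server base URL from the dialog's answers,
    falling back to the scheme's default port when none is given."""
    scheme = "https" if ssl_enabled else "http"
    if port is None:
        port = DEFAULT_PORTS[scheme]
    return "{}://{}:{}".format(scheme, host, port)

# The REST entry point can then be used as a connectivity check, for example:
# cm_base_url("cm.example.com") + "/api/version"
```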
  10. To specify values for the Java™ parameters, complete these steps:
    1. From the Java trace level list, select a value for the trace level that is used by Java providers.
    2. Optional: In the JVM arguments field, specify a list of arguments for the Java virtual machine.
      The list of arguments must be compatible with the version of Java that is installed along with the agent.
    3. Click Next.
  11. Optional: To add Standby ResourceManagers, complete the following steps:
    1. Click New.
    2. In the Standby ResourceManager Hostname field, enter the host name of the node where the daemon process for Standby ResourceManager runs.
    3. In the Standby ResourceManager Port field, enter the port number that is associated with the daemon process for Standby ResourceManager. The default port number is 8088.
    4. Click Test Connection to validate connection to the specified host name and the port number.
      After you click Test Connection, an appropriate validation message is displayed when:
      • The connection to the specified host name and port succeeds or fails.
      • A value for a host name is blank.
      • A value for a port is blank.
      • A non-integer value is specified for a port number.
      Update the configuration values as suggested in the validation messages, and verify the connection again.
    5. Repeat steps 1, 2, and 3 to add more Standby ResourceManagers.
      If you want to remove any of the Standby ResourceManagers, click Delete corresponding to the Standby ResourceManager that you want to remove.
    6. Click Next.
  12. In the Class path for external jars field, specify the class path for JAR files.
    This class path is added to the class path that is generated by the agent. You can leave this field blank.
  13. Click OK.
    The specified configuration settings are saved.
  14. Right-click Monitoring Agent for Hadoop and click Start.

What to do next

  1. Enable the subnode events to view eventing thresholds of the Hadoop agent. For information about enabling subnode events, see Configuring the dashboard for viewing Hadoop events.
  2. Log in to the Cloud APM console to view data that is collected by the agent in the dashboards. For information about using the Cloud APM console, see Starting the Cloud APM console.