Configuring auto-scalable clusters for JVM elasticity

You can configure a collective to support Java virtual machine (JVM) elasticity. With JVM elasticity, the scaling controller can start or stop Liberty servers based on resource use and scaling policies. Only the servers that are already in the collective are eligible for scaling. There is no provisioning of new servers.

Before you begin

The types of resource usage information that are collected vary among JDKs. IBM JDK 1.7 for Windows and Linux® operating systems provides all of the usage information that is necessary for auto scaling and is the preferred JDK. Other JDKs might not provide all of the usage information that is necessary for scaling decisions that are based on individual JVM resource usage.
Avoid trouble: The administrative console can start and stop a Liberty server that is a cluster member of an auto-scalable cluster, but only when the server is in maintenance mode. Starting or stopping such a server from the command line can lead to unpredictable results.

Procedure

  1. Create a collective.

    For details on creating a collective controller and member server, see Configuring a Liberty collective.

    Note: Complete this step, which creates the collective, adds the members, and starts the controllers and members, before you continue with the procedure.
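    As a sketch of the first step, the following commands use the Liberty collective utility. The server names (controller1, member1), host name, port, user credentials, and passwords are placeholders; substitute values for your environment, and see Configuring a Liberty collective for the full procedure, including copying the generated configuration into each server.xml file:

    ```sh
    # Create the collective controller configuration (placeholder names and passwords)
    wlp/bin/collective create controller1 --keystorePassword=controllerKeystorePwd

    # Copy the XML output from the previous command into the controller server.xml,
    # then start the controller
    wlp/bin/server start controller1

    # Join a member server to the collective; 9443 assumes the default HTTPS port
    wlp/bin/collective join member1 --host=controllerHost --port=9443 \
        --user=adminUser --password=adminPassword --keystorePassword=memberKeystorePwd

    # Copy the XML output into the member server.xml, then start the member
    wlp/bin/server start member1
    ```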
  2. Add the scalingController-1.0 feature to the server.xml file of one or more collective controllers. When you save the server.xml file, default policies are enforced unless otherwise specified.
    <featureManager>
     <feature>jsp-2.2</feature>
     <feature>collectiveController-1.0</feature>
     <feature>scalingController-1.0</feature>
    </featureManager>

    After you add the feature, if the collective controller is running, the following messages are displayed in any order in its messages.log file:

    CWWKV0300I: The StackManager service started.
    CWWKV0302I: The existing stacks are []
    CWWKV0100I: The ScalingController feature is activated.
    CWWKX1002I: Singleton service ScalingControllerSingletonService for scope 
    CWWKV0102I: This server is elected to be the primary scaling controller.
    CWWKF0012I: The server installed the following features: [scalingController-1.0].
    Note: Because Liberty configuration is dynamic, when you add the scaling controller feature, the controller's default scaling policy takes effect immediately and you might get unexpected results. For example, the default policy sets min=2 servers, so when you save the scaling controller server.xml file, the controller attempts to start two servers. If you do not want that behavior, define a scaling policy for the controller at the same time.
    Note: It might take some time for the scaling controller to register the member and display the CWWKV0121I message.
  3. Optional: Change the default values of the scaling policies to meet the needs of your environment. For more information, see Defining scaling policies to manage workload.
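    As a sketch of a scaling policy in the scaling controller server.xml file, the following fragment defines a default policy and a named policy that is bound to a hypothetical cluster named cluster1. The element names, attributes, and values shown here are assumptions; verify them against Defining scaling policies to manage workload before you use them:

    ```xml
    <scalingDefinitions>
        <!-- Default policy applied to clusters without a specific policy
             (min/max values here are illustrative, not product defaults) -->
        <defaultScalingPolicy enabled="true" min="1" max="4" />

        <!-- Named policy bound to a hypothetical cluster called cluster1 -->
        <scalingPolicy name="cluster1Policy" enabled="true" min="2" max="6">
            <bind clusters="cluster1" />
        </scalingPolicy>
    </scalingDefinitions>
    ```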
  4. Add the scalingMember-1.0 feature to all collective members that you want the scaling controller to control.
    Restriction: The Scaling Member feature (scalingMember-1.0) is not available on the IBM i platform.
    Define a hostSingleton element with a port in the server.xml file of each scaling member. All scaling members on the same host must use the same port. You can specify any port number, but the port number must be unique on the host computer. The following example uses port number 20020:
    <featureManager>
     <feature>jsp-2.2</feature>
     <feature>scalingMember-1.0</feature>
    </featureManager>
    
    <hostSingleton name="ScalingMemberSingletonService" port="20020" />

    If the server is not running when you add the features and the hostSingleton element, start it manually once so that the scaling controller recognizes the added features. The following messages are displayed in any order in the messages.log file of the collective member:

    CWWKX1000I: The SingletonMessenger MBean is available.
    CWWKX7400I: The ClusterMember MBean is available.
    CWWKX1002I: Singleton service ScalingMemberSingletonService for scope host is created.
    CWWKV0200I: The ScalingMember feature is activated.
    CWWKX1004I: Messenger connection is connected to host=controller_host_name, port=controller_port_number.

    Only one scaling member per host communicates with the scaling controller. The first scaling member to connect to the ScalingMemberSingletonService is elected as the host leader. If the host leader stops, another scaling member takes over as the host leader through an election process that is arbitrated by the ScalingMemberSingletonService. All the scaling members on the same host and cluster must use the same ScalingMemberSingletonService port.

    Note: When a scaling member is elected as the host leader, you see the following message in the messages.log of the collective member:
    CWWKV0203I: Server host=host_name; userdir=path_to_usr_directory; server=member_name; port=member_port_number; service=ScalingMemberSingletonService; scope=host is elected as the host leader.
    Note: If you do not add the hostSingleton element to the scalingMember server.xml or if you use different ports on each scalingMember on the same host, multiple host leaders might be elected. This can result in incorrect scaling decisions. You see this message in the controller's messages.log:
    CWWKV0123E: Duplicate host singleton leaders have been detected on host host_name.  This condition may degrade scaling controller decisions.  The leader identity of server server_name1 is leader_id1.  The leader identity of server server_name2 is leader_id2.

    For more information about the hostSingleton element, see Collective Member.