Policies

To add policies to the virtual system pattern, click Add policy to pattern and select a policy, or select a component part on the canvas and click the Add a Component Policy icon to add a component-specific policy.

You can add these policies to the pattern or a component:

Base Scaling Policy

Scaling is a runtime capability to automatically scale your virtual system instance as the load changes. A scaling policy component defines this capability and the processor and memory conditions under which scaling activities are performed.
Important: Parameters for scaling are configured when you add the scaling policy to the pattern. These values are retained even if more components are later added to the pattern. You might need to adjust the scaling parameters if you add more components after you originally configure the scaling policy.
Specify the following attributes for the scaling policy:
Number of instances
Required. Specifies the number of virtual machines to be created. The default value is 1. In 2.3.3.0 and 2.3.3.2, the acceptable value range is 1 - 10.

From IBM® Cloud Pak System 2.3.3.3 onwards, the number of instances for any virtual machine in an OpenShift® Container Platform cluster pattern can be set to 0. For patterns with CoreOS worker virtual machines, the pattern must also contain a non-CoreOS node with a Worker Scaling Policy defined. This configuration allows the pattern deployer to start with 0 Workers and add as many Workers as needed later. The number of instances of Master, Bootstrap, and Helper nodes can also be set to 0. However, these node types do not have a scaling policy, so they cannot be added later like worker nodes; if you set their number of instances to 0 during deployment, the virtual machines are not created.

This configuration can be useful in multi-system deployments where one system does not need any Master or Primary Helper nodes.

Instance number range of scaling in and out
Required. Specifies the scaling range for instance members that are hosting the topology. The default range for this attribute is 1 - 10. In 2.3.3.0 and 2.3.3.2, the acceptable value range is 1 - 50. In 2.3.3.3, the acceptable value range for non-OCP patterns is 1 - 50. For OpenShift Container Platform accelerators and patterns, the minimum number of nodes can be set to 0. If there is a Worker Scaling Policy running on the Primary Helper node, the Workers can be scaled in or out any time after deployment. If an OpenShift Container Platform Bootstrap, Master, or Helper node has its number of instances set to 0, no instances of that type get deployed and the number cannot be scaled out later.
Maximum vCPU count per virtual machine
Specifies the maximum number of virtual processors that are used by the topology.

The acceptable value range is 2 - 8.

Maximum memory size per virtual machine
Specifies the maximum amount of virtual memory, in gigabytes, that is used by your application.

The acceptable value range is 4 GB - 128 GB.

CPU Based
Scaling in and out when CPU usage is out of threshold range
Specifies the processor threshold condition to start scaling activity. When the average processor use of a virtual machine is out of this threshold range, the specified scaling action is taken. If the policy is set at the pattern level, scaling actions are taken if the processor use of any virtual machine in the topology is out of the threshold range. If the policy is set at the component level, scaling actions are taken if the processor use of the virtual machine where the component is running is out of the threshold range. The acceptable value range is 0 - 100%.
Minimum time (seconds) to trigger add or remove
Required. Specifies the time duration condition to start scaling activity. The default value is 300 seconds. The acceptable value range is 30 - 1800.
scaling by
Specifies the action to take when scaling is triggered. You can select one of the following actions: add vCPU only, add or remove nodes, or add or remove nodes (add vCPU first and then add nodes).
Memory Based
Scaling in and out when memory usage is out of threshold range
Specifies the memory threshold condition to start scaling activity. When the average memory usage of your topology is out of this threshold range, the specified scaling action is taken. The default value is 20 - 80%. The acceptable value range is 0 - 100%.
Minimum time (seconds) to trigger add or remove
Required. Specifies the time duration condition to start scaling activity. The default value is 300 seconds. The acceptable value range is 30 - 1800.
then scaling by
Specifies the action to take when scaling is triggered. You can select one of the following actions: add virtual memory only, add or remove nodes, or add or remove nodes (add virtual memory first and then add nodes).
Note: If the middleware requires a restart before its virtual memory can be increased, such as with IBM WebSphere® Application Server, the middleware is restarted in a rolling fashion so that there is no downtime during scaling.
Note: You can select CPU-based scaling, memory-based scaling, or both. If you select more than one type, the relationship between the types is handled by the system as an OR. For example, if you select processor and memory-based scaling, with the action to add or remove a node, this action is triggered if the condition set for either processor or memory scaling is met.
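To make the OR relationship in the preceding note concrete, the following Python sketch evaluates CPU-based and memory-based triggers against their threshold ranges. The function names and the CPU default range are illustrative assumptions; the 20 - 80% memory default comes from the description above.

# Illustrative sketch of the OR relationship between scaling triggers; not a product API.
def scale_out_needed(avg_cpu_pct, avg_mem_pct,
                     cpu_range=(20, 80),   # assumed CPU threshold range for illustration
                     mem_range=(20, 80)):  # default memory threshold range (20 - 80%)
    """Return True if either metric exceeds its upper threshold."""
    return avg_cpu_pct > cpu_range[1] or avg_mem_pct > mem_range[1]

def scale_in_needed(avg_cpu_pct, avg_mem_pct,
                    cpu_range=(20, 80), mem_range=(20, 80)):
    """Return True if either metric falls below its lower threshold."""
    return avg_cpu_pct < cpu_range[0] or avg_mem_pct < mem_range[0]

# CPU at 85% triggers a scale-out action even though memory is within range.
print(scale_out_needed(avg_cpu_pct=85, avg_mem_pct=50))  # True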

DB Proxy Policy

Use the DB Proxy Policy to enable a database proxy between an application and an existing database. The DB Proxy Policy specifies database rules for clients accessing databases at the table, row, and data cell level. For more information about the DB Proxy Policy and configuring database proxy rules, see Working with database rules for the DB Proxy Policy.

After you add the DB Proxy Policy, you can specify the following basic attributes that are required to create a connection to the existing database, including the database rule set that governs client access to the target database for the deployed pattern (an illustrative sketch of these attributes follows the list):
Proxy name
A unique name for the database proxy.
Rule Set name
Select a database rule set from the list of available rule sets. Only rule sets in the Completed state are available for selection.
Proxy database port
The port number for the proxy database.
Proxy database user
The user name authorized to access the proxy database.
Proxy database password
The password that accompanies the specified proxy database user name. You must specify the password twice to verify it is entered correctly.
Target database name
The name of the target database.
Target database host
The host name for the target database.
Target database port
The port number for the target database.
Target database driver
The database driver that is used to connect the database proxy with the target database.
Target database user
The user name authorized to access the specified target database.
Target database password
The password that accompanies the specified target database user name. You must specify the password twice to verify it is entered correctly.
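For illustration only, the attributes of one database proxy connection could be collected as in the following Python sketch; the keys mirror the fields above and the example values are entirely hypothetical.

# Hypothetical sketch of one DB Proxy Policy connection; values are examples only.
db_proxy_connection = {
    "proxy_name": "orders-proxy",
    "rule_set_name": "orders-rules",        # must be a rule set in the Completed state
    "proxy_database_port": 50001,
    "proxy_database_user": "proxyuser",
    "proxy_database_password": "********",  # entered twice in the UI for verification
    "target_database_name": "ORDERSDB",
    "target_database_host": "db.example.com",
    "target_database_port": 50000,
    "target_database_driver": "db2jcc4",    # driver used to reach the target database
    "target_database_user": "dbuser",
    "target_database_password": "********",
}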

You can click Add Next as needed to configure additional database proxies between the application and the target database.

At deployment time you can also modify these configurations if they have not been locked. A database proxy is installed on the virtual machine for each DB Proxy Policy that is added to the component in the virtual system pattern.

After the pattern is deployed, you might need to update or modify the database proxy rule configuration. From the Instance Console for the deployed pattern instance, use the Manage > Operations link for the DBPROXY_POLICY operation. From there you can update the rule set that contains the database proxy rules, update the credentials for accessing the target database, or change the port number for the proxy database as needed. Click Submit to update the DB Proxy Policy configuration settings.

IBM Spectrum Scale Client Policy

Add a policy to define the pattern component as an IBM Spectrum Scale Client. This policy contains the required IBM Spectrum Scale installation process for connecting the deployed pattern instance to an IBM Spectrum Scale Server. The deployed pattern instance can then access and use the shared file system that is associated with the IBM Spectrum Scale Server. After you add the policy, configure the settings in the properties pane for your IBM Spectrum Scale Client. For more information on these settings, see the related links.

IM policy

Add an Installation Manager policy to customize the Installation Manager configuration for the pattern. This policy includes the following attributes:
Install location
Required. Specify the installation location of Installation Manager. This directory must not be the same directory as, a parent directory of, or a subdirectory of the Installation Manager data directory. Ensure that write permissions are defined for this path for the user ID that is specified in the User ID attribute.
dataLocation
Required. Specify the directory for the Installation Manager data directory. This location stores information about installed packages. Ensure that write permissions are defined for this path for the user ID that is specified in the User ID attribute. If the user does not have existing permissions, you can optionally create an add-on to grant the user permission to the specified directory. This add-on might also verify that the directory exists.
eclipseCache
Optional. This attribute specifies the location of the shared resources directory. This location is stored in the com.ibm.cic.common.core.preferences.eclipseCache preference key, which is defined in a response file. The shared resources directory is specified the first time that you install a software package. You cannot change this location after you install a software package. Ensure that write permissions are defined for this path for the user ID that is specified in the User ID attribute. If the user does not have existing permissions, you can optionally create an add-on to grant the user permission to the specified directory. This add-on might also verify that the directory exists.
When you define this attribute in the Pattern Builder, the defined value is included in the lifecycle script for the pattern and can be queried through the maestro.vsys.im.getEclipseCache() maestro API (a sketch of this call follows the policy attributes). The value is also set in the Installation Manager response.xml file. When this value is defined in the response.xml file, the Installation Manager client places the shared resources in the specified location.
Note: If this value is not used by the middleware plug-ins in the pattern, the directory that is specified by this attribute is not created.
User ID
Required. Install the Installation Manager binary files and packages with this user ID.
Restriction: The root user is not allowed.
After this policy is added to a pattern, Installation Manager is installed by using the specified user ID to the specified installation directory with the user-configured data location and Eclipse caching location, if it is specified.
Restriction: You cannot add an IM policy to any existing pattern without changing the implementation of the plug-ins that are included in the pattern to make use of the IM configuration data that is specified by the policy.
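The maestro call mentioned in the eclipseCache attribute can be used from a pattern lifecycle script. The following minimal sketch assumes that the maestro module is available for import in the script environment; the fallback path and the print statement are illustrative assumptions, not product behavior.

# Minimal sketch: query the shared resources (eclipseCache) location in a lifecycle script.
import maestro  # assumed to be provided by the pattern script environment

eclipse_cache = maestro.vsys.im.getEclipseCache()
if not eclipse_cache:
    # The directory is created only if the middleware plug-ins in the pattern use it.
    eclipse_cache = "/opt/IBM/IMShared"  # hypothetical fallback, not a product default
print("Installation Manager shared resources directory: %s" % eclipse_cache)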

Interim fix policy

Add an interim fix policy to install emergency fixes during deployment to the pattern or to a particular virtual machine, depending on whether you apply the policy at the pattern-level or component-level.

After you add the interim fix policy, choose one or more fixes from the list to apply at deployment time. The fixes that are displayed in the list are emergency fixes that were uploaded to the emergency fixes catalog (through Catalog > Emergency Fixes) or to the Installation Manager repository (through Catalog > IBM Installation Manager Repository).

You can add multiple fixes to one interim fix policy, or you can add multiple interim fix policies with one fix for each policy. However, if the fixes are pulled from the Emergency Fixes catalog, the first option does not guarantee the order in which the fixes are applied, which might cause problems. If the fixes are pulled from the Installation Manager Repository, the underlying Installation Manager technology applies the fixes in the correct order.

Interim fixes in the Emergency Fixes catalog can be configured to be applicable to an image, plug-in, or middleware. Only fixes that are applicable to an image, plug-in, or middleware that is in the pattern are displayed.

Interim fixes in the IBM Installation Manager repository can be configured to be applicable to middleware. Only fixes that are applicable to middleware that is in the pattern are displayed.

Network Redirector Policy

Use the Network Redirector Policy to specify redirection of the network flow of outgoing requests from the application.

The set of applications that run on the virtual machine might use a hardcoded set of addresses and ports to communicate with other applications that might be hosted on a different set of addresses and ports. You can add connections on the Network Redirector Policy to redirect outgoing traffic from the destination that the application specifies to the redirected destination.

You can specify addresses in host name or IPv4 format.

After you add the policy, you can specify the following attributes to define the first connection:
Connection Name
(Required) The name of the outgoing connection.
Destination Address
(Required) The host name or IP address to which the application is sending requests.
Destination Port
(Optional) The port number for the outgoing connection that is used by the application.
Redirect Destination Address To
(Required) The address to which the destination address needs to be redirected.
Redirect Destination Port To
(Optional) The port number to which the destination port needs to be redirected.

You can click Add Next as needed to configure additional redirection connections.

Note the following limitations:
  • If you have two images within the same pattern, you cannot redirect network traffic from an application that is located in one image to an application on the other image because the address where the application is hosted is not known until after it is deployed.
  • You cannot have more than one connection with the same values for the Destination Address and Destination Port combination.
    Note: However, you can have more than one connection with the same values for the Redirect Destination Address To and Redirect Destination Port To combination, as shown in the sketch after this list.
  • If two or more connections specify the destination address as a host name, and all of the connections specify the same host name, then the redirected destination address for each connection must also all be the same.
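As a rough illustration of the limitations above, the following Python sketch keys each connection by its Destination Address and Destination Port combination, so a duplicate combination is rejected while several connections can share one redirect target. All names and values are hypothetical; this is not a product API.

# Illustrative model of Network Redirector Policy connections; not a product API.
redirects = {}

def add_connection(name, dest_addr, dest_port, redirect_addr, redirect_port=None):
    key = (dest_addr, dest_port)
    if key in redirects:
        # Only one connection is allowed per Destination Address/Port combination.
        raise ValueError("Duplicate destination: %s:%s" % (dest_addr, dest_port))
    redirects[key] = {"name": name,
                      "redirect_address": redirect_addr,
                      "redirect_port": redirect_port}

# Two connections can share the same redirect target...
add_connection("legacy-db", "olddb.example.com", 50000, "newdb.example.com", 50000)
add_connection("legacy-mq", "oldmq.example.com", 1414, "newdb.example.com", 50000)
# ...but repeating the same destination address and port raises an error:
# add_connection("duplicate", "olddb.example.com", 50000, "other.example.com")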

At deployment time, you can also modify these configurations.

After the pattern is deployed, you might need to delete, add, or update the network redirector policy configurations. From the Instance Console for the deployed pattern instance, use the Manage > Operations link for the Network Redirector Policy. From there you can add, delete, or modify existing connections. Click Submit to update the network redirection configuration settings.

Placement Policy

Add a placement policy to apply placement constraints to the pattern so that deployed virtual machines are placed according to this policy. Always use the placement policy along with the base scaling policy to apply placement hints for additional instances of a particular node in the pattern. After you add the policy, specify the following attributes to define the placement constraints to apply to the deployment:
Level
This attribute specifies the level at which the placement constraint applies. Possible choices are:
  • Location: The placement constraint applies at the system level.
  • Cloud Group: The placement constraint applies at the cloud group level.
Collocation
This attribute specifies whether the virtual machine instances should be placed at the same location or cloud group. Possible choices are:
  • Collocated: Place virtual machine instances at the same location or cloud group.
  • Anti-collocated: Place virtual machine instances at different locations or cloud groups.
Hard or Soft Constraint?
This attribute specifies whether the placement constraint must be satisfied or not. Possible choices are:
  • Hard: The placement constraint must be satisfied or the deployment fails.
  • Soft: If the placement constraint cannot be satisfied, the deployment continues anyway.
Note: The placement policy is only useful when it is associated with a particular node in the pattern, and applies only to virtual machine instances for that specific node. Although it is possible to add the placement policy at the pattern level, rather than to a particular node, no placement policy is applied because there is no way to link one or more nodes to the placement policy with this configuration.
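For illustration only, a placement policy attached to one node might be summarized as in the following Python sketch; the field names and values are hypothetical and simply mirror the attributes described above.

# Hypothetical summary of a placement policy on a single node in the pattern.
placement_policy = {
    "level": "Cloud Group",            # or "Location" for a system-level constraint
    "collocation": "Anti-collocated",  # spread instances across cloud groups
    "constraint": "Hard",              # "Hard": deployment fails if unsatisfied;
                                       # "Soft": deployment continues anyway
}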

Routing policy

Add a routing policy to configure the topology or component for load balancing provided by one of the following services:
  • On Demand Router-based Load Balancer service, introduced in Cloud Pak System 2.2.5.0
  • Elastic Load Balancing Service, available before Cloud Pak System 2.2.5.0
After you add the policy, specify the following attributes for the routing policy:
Endpoint
The endpoint is the URL that is used to access the application through Elastic Load Balancing.
Specify the URL in the following format:
http[s]://<virtual_host_server>/<context_root>

The <context_root> must be a non-blank name of an application on the specified server and should not contain the wildcard character, an asterisk (*). An illustrative check of this format follows the policy attributes.

Port
Enter the port that is used by the topology (for a pattern-level policy) or component (for a component-level policy). This port is opened on the firewall when the pattern deploys.
Enable HTTPS
Specify whether HTTPS is used by the topology (for a pattern-level policy) or component (for a component-level policy). If you select Enable HTTPS, you must enter the port that the topology or component uses for HTTPS. If the port is specified, it is opened on the firewall when the pattern deploys.
Use ODR Load Balancer Service
Select this checkbox to use the ODR Load Balancer Service. Clear it to use the Elastic Load Balancer Proxy Service.
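As an illustration of the endpoint format noted above, the following Python sketch checks that a URL matches http[s]://<virtual_host_server>/<context_root> with a non-blank context root and no wildcard. The function name and example URLs are hypothetical; this is not a product API.

# Illustrative check of the routing policy endpoint format; not a product API.
from urllib.parse import urlparse

def endpoint_is_valid(endpoint):
    parsed = urlparse(endpoint)
    if parsed.scheme not in ("http", "https") or not parsed.netloc:
        return False
    context_root = parsed.path.lstrip("/")
    # The context root must be non-blank and must not contain the wildcard (*).
    return bool(context_root) and "*" not in context_root

print(endpoint_is_valid("https://vhost.example.com/myapp"))  # True
print(endpoint_is_valid("https://vhost.example.com/*"))      # False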

Security policy

Add a security policy to configure whether Secure Shell (SSH) password login is available and whether you can log in through SSH with the root account. Add the security policy at the pattern level to enable or disable SSH password and root login for all virtual machines that are deployed as part of the pattern. Add the security policy to individual components of a pattern to enable or disable SSH password and root login for the virtual machines that are deployed for that part of the pattern.

After you add the policy, configure the Disable SSH password login and Disable SSH root user login settings in the properties pane.
Note:
  • Configuring SSH password login with this policy does not affect your SSH key login configuration.
  • If you disable SSH password authentication without enabling SSH key authentication, you will not be able to log in to the virtual machine at all. However, you can use the instance console to enable SSH key authentication for a deployed virtual system.
  • A pattern level security policy overrides any component level security policies.
Results for a pattern level security policy:
  • If you add a pattern level security policy and select Disable SSH password login (for Virtual System Patterns only) and Disable SSH root user login, you cannot use password authentication to log in to any virtual machine that is deployed with this pattern.
  • If you add a pattern level security policy, do not select Disable SSH password login (for Virtual System Patterns only) and select Disable SSH root user login, you can use password authentication to log in to any virtual machine that is deployed with this pattern. You cannot log in to the virtual machines through SSH with the root user account, but you can still log in with other accounts, such as virtuser.
  • If you add a pattern level security policy and do not select Disable SSH password login (for Virtual System Patterns only) or Disable SSH root user login, you can use password authentication to log in to any virtual machine that is deployed with this pattern. You can log in to the virtual machines through SSH with the root user account.
Results for a component level security policy, with no pattern level security policy:
  • If there are no pattern level or component level security policies, you can use password authentication to log in to the virtual machines that are deployed as part of the component. You can log in to the virtual machines through SSH with the root user account.
  • If you add a component level security policy and select Disable SSH password login (for Virtual System Patterns only) and Disable SSH root user login, you cannot use password authentication to log in to the virtual machines that are deployed as part of the component.
  • If you add a component level security policy, do not select Disable SSH password login (for Virtual System Patterns only) and select Disable SSH root user login, you can use password authentication to log in to the virtual machines that are deployed as part of this component. You cannot log in to the virtual machines that are deployed as part of this component through SSH with the root user account, but you can still log in with other accounts, such as virtuser.
  • If you add a component level security policy and do not select Disable SSH password login (for Virtual System Patterns only) or Disable SSH root user login, you can use password authentication to log in to the virtual machines that are deployed as part of this component. You can log in to the virtual machines that are deployed as part of this component through SSH with the root user account.
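As a rough summary of the results listed above, the following Python sketch maps the two settings to the SSH login methods that remain available; the function names are illustrative only, and SSH key authentication, which the policy does not affect, is not modeled.

# Illustrative summary of the security policy settings; not a product API.
def ssh_login_behavior(disable_password_login, disable_root_login):
    return {
        "password_login": not disable_password_login,  # SSH key login is not affected
        "root_login": not disable_root_login,          # other accounts, such as virtuser,
                                                       # can still log in
    }

# A pattern level policy overrides any component level policy.
def effective_policy(pattern_policy, component_policy):
    return pattern_policy if pattern_policy is not None else component_policy

print(ssh_login_behavior(disable_password_login=False, disable_root_login=True))
# {'password_login': True, 'root_login': False}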
Tip: If you move a policy from one virtual image or component to another virtual image or component, the configuration settings are preserved except for the hosted link on the previous virtual image or component.