Configuring servers for high availability

You can set up multiple servers for high-availability configurations. In such configurations, multiple servers either operate at the same time to provide more capacity or operate in sequence to provide failover protection.

Install a load balancer to distribute requests to the servers. Regardless of whether all of the servers are active or some are active and others are on passive standby, most setups use an external load balancer to provide automated failover.

To set up servers in a high-availability configuration, you install the server on separate systems and connect the servers to the same database. Then, you configure a load balancer to distribute the traffic between the servers. Instead of accessing the servers directly, users access the load balancer URL. To the users, that URL appears to host a single instance of the server with high capacity; the users are unaware of the multiple servers.

You can configure your load balancer to support either cold standby or clustered high-availability scenarios.
  • In a clustered scenario, all servers are running, and the load balancer routes processes to all servers, based on their availability. This configuration can improve server performance.
  • In a cold standby scenario, one server is active at a time. If the active node fails, the secondary server is started and network traffic is routed to it. This configuration reduces downtime nearly to zero but does not improve server performance.
  1. Install and configure the database as usual.
    If you already have a database, you can use it for the clustered servers. See Installing the database.
  2. If you already have one or more servers, convert them to cluster servers with the following steps:
    1. Stop the server.
    2. On the server, open the installation_directory/conf/server/installed.properties file in a text editor.
    3. In this file, update the public.url property to the URL and port of the load balancer.
      Escape colons and other special characters with a backslash (\), as shown in the following example:
      public.url=http\://balancer.example.com\:8080
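      If users connect to the load balancer over HTTPS, the value might instead look like the following example; the host name is from the previous example, and the port 8443 is illustrative:
      public.url=https\://balancer.example.com\:8443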
      Note: The load balancer must be able to communicate with the UrbanCode Release servers.

      After the server is started, the property is set in the database.

      The public URL can also be specified on the System Settings page.

    4. Save the file.
  3. To install new cluster servers, install the servers as usual, but with the following changes:
    • Connect each server to the same database.
    • For the host name that the users access, specify the host name of the load balancer, not the computer that hosts the server.
    • If you are installing the server on the same computer as another server, use a different port for HTTPS requests for each server.
    See Installing the server. Note the ports for each server because you need this information later. The default port for HTTP requests is 8080, and the default port for HTTPS requests is 8443.
  4. Start each server.
  5. Log in to one server, and select the Keep me logged in check box.
  6. Open the installed.properties file for that server.
  7. In the installed.properties file, find the cookie.key property, and copy it.
    This property specifies a key that is included in a cookie when a user logs in. You include it in the installed.properties file for each server so that users do not have to log in separately on each server.
  8. In the installed.properties file for each server, add the following properties:
    ha.activation.enabled=yes
    ha.node.name=node_name
    cookie.key=cookie_key_value
    • For node_name, specify a unique node name for each server. After you set up the servers, this node name appears on each server. Knowing which server you are using can help you debug problems. To see the name of the node that you are using, click Help at the top of any page. The menu includes the node name, as shown in the following figure:
      The help menu for the server, showing the node name
    • For the cookie.key property, specify the cookie key that you copied from the first server. This key must be the same on each server.
      Note: You do not need to add this property to the server from which you copied the information. On that server, add only the other two lines.

    Restart all the servers for the updates to take effect.

    For example, the code that you add to the installed.properties file might look like the following example:
    ha.activation.enabled=yes
    ha.node.name=HA node 1
    cookie.key=D3ZizBbRSWFjdOQ8N2a/yQ\=\=
  9. To store plug-ins on a shared directory, add the plugins.folder.path property for each server, and specify the shared directory, as in the following example:
    plugins.folder.path=/
    To use plug-ins in an HA configuration, plug-ins must be installed in a shared directory that all nodes can access. This property is optional for non-HA installations.
  10. To share images that are used in notification templates, add the notification.images.folder.path property for each server, and specify the shared directory.
  11. Optional: To store attachments on a shared directory, add the attachments.folder.path property and specify the shared directory, as in the following example:
    attachments.folder.path=/
    If the property is not specified, an attachments folder is created in the configuration area.
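    If every node mounts the same shared file system, the shared-path properties from steps 9 through 11 might look like the following example. The mount point /shared/ucr is a hypothetical path; substitute a directory that all nodes can access:
    plugins.folder.path=/shared/ucr/plugins
    notification.images.folder.path=/shared/ucr/notification-images
    attachments.folder.path=/shared/ucr/attachments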
  12. Restart each server.
  13. Configure a load balancer to share the load between the servers according to your requirements.
    For more information, see the documentation for your load balancer.
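    The following sketch shows one possible configuration, assuming an NGINX load balancer; the syntax for your load balancer will differ. The host name balancer.example.com matches the earlier public.url example, server1.example.com and server2.example.com are hypothetical server hosts, and the ports are illustrative. The sketch omits details such as SSL termination and session affinity that your environment might require.
    # nginx.conf fragment (illustrative only)
    upstream release_servers {
        # Clustered scenario: traffic is distributed across both servers.
        server server1.example.com:8443;
        server server2.example.com:8443;
        # Cold standby scenario: instead of the second server line above, mark
        # the second server as a backup so that it receives traffic only if
        # the first server is unavailable:
        # server server2.example.com:8443 backup;
    }
    server {
        # The listen port and server name match the public.url value, for
        # example http://balancer.example.com:8080.
        listen 8080;
        server_name balancer.example.com;
        location / {
            proxy_pass https://release_servers;
        }
    }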
After you configure the load balancer to distribute connections to the servers, users can connect to a single URL and use the capacity of all of the servers. The servers also ensure that the correct number of licenses is used, even if a user accesses multiple servers.