

5765-F62 IBM PowerHA for AIX V5.5

IBM United States Sales Manual
Revised:  April 21, 2010.

Table of contents

  • Product Life Cycle Dates
  • Program Number
  • Abstract
  • Product Positioning
  • Highlights
  • Description
  • Technical Description
  • Planning Information
  • Publications
  • Security, Auditability, and Control


 
Product Life Cycle Dates
Program Number  VRM    Announced   Available   Marketing Withdrawn  Service Discontinued
5765-F62        5.5.0  2008/10/07  2008/11/14  2010/09/30           2012/04/30
5765-F62        5.4.1  2007/11/06  2007/11/09  2010/09/30           2011/09/30
5765-F62        5.4.0  2006/07/25  2006/07/28  2010/09/30           2011/09/30
5765-F62        5.3.0  2005/07/12  2005/08/12  2008/09/30           2009/09/30
5765-F62        5.2.0  2004/06/22  2004/07/16  2006/04/30           2007/09/30
5765-F62        5.1.0  2003/06/24  2003/07/11  2005/03/31           2006/09/01

 
Program Number
  • PowerHA for AIX, V5.5 (5765-F62)
  • PowerHA for AIX 5L SW Maintenance 1 Yr (5660-HMP)
  • PowerHA for AIX 5L SW Maintenance 1 Yr AL (5661-HMP)
  • PowerHA for AIX 5L SW Maintenance 3 Yr (5662-HMP)
  • PowerHA for AIX 5L SW Maintenance 3 Yr RNWL (5663-HMP)
  • PowerHA for AIX 5L SW Maintenance 3 Yr AL (5664-HMP)

 
Abstract

PowerHA V5.4.1

PowerHA V5.4.1 helps protect critical business applications from outages. For over a decade, PowerHA has been providing reliable monitoring, failure detection, and automated failover for 24 x 7 business application environments. The optional PowerHA Extended Distance (PowerHA/XD) feature adds unlimited distance data mirroring and recovery solutions for critical business needs; the optional PowerHA Smart Assist feature helps you easily deploy high availability into your critical applications.

PowerHA V5.4.1 offers you:

  • Integrated support for utilizing AIX Workload Partition (WPAR) to maintain high availability for your applications by configuring them as a resource group and assigning the resource group to an AIX WPAR. By using PowerHA in combination with AIX WPAR, you can leverage the advantages of application environment isolation and resource control provided by AIX WPAR along with the high availability feature of PowerHA V5.4.1.

  • PowerHA/XD support of PPRC Consistency Groups to maintain data consistency for application-dependent writes on the same logical subsystem (LSS) pair or across multiple LSS pairs. PowerHA/XD responds to PPRC consistency group failures by automatically freezing the pairs and managing the data mirroring.

  • A new Geographical Logical Volume Manager (GLVM) Status Monitor that provides the ability to monitor GLVM status and state. These monitors enable you to keep better track of the status of your application data when using the PowerHA/XD GLVM option for data replication.

  • Improved support for NFS V4, which includes additional configuration options, as well as improved recovery time. PowerHA can support both NFS V4 and V2/V3 within the same high availability environment.

  • Usability improvements for the WebSMIT GUI, including the ability to customize the color and appearance of the display. Improvements to First Failure Data Capture and additional standardized logging are designed to increase the reliability and serviceability of PowerHA V5.4.1.

  • New options for detecting and responding to a partitioned cluster. Certain failures or combinations of failures can lead to a partitioned cluster, which, in the worst case, can lead to data divergence (out-of-sync data between the primary and backup nodes in a cluster). PowerHA V5.4.1 introduces new features for detecting a partitioned cluster and avoiding data divergence through earlier detection and reporting.

Research shows that most application outages are caused not by hardware failures, but by network or application failures, or by external causes. Downtime remains a threat to business continuity. IBM High Availability Cluster Multi-processing (PowerHA) V5.4 helps protect critical business applications from failures. For over a decade, PowerHA has been providing reliable monitoring, failure detection, and automated failover for 24 x 7 business application environments. The optional PowerHA Extended Distance (PowerHA/XD) feature adds unlimited distance data mirroring and recovery solutions for critical business needs; the optional PowerHA Smart Assist feature helps you easily deploy high availability into your critical applications.

PowerHA V5.4 brings together the strengths of IBM Systems and TotalStorage with enhanced ease of integration and use, as well as expanded geographic capabilities, to provide you with a single, world class source of protection for your mission-critical applications and data.

PowerHA V5.4: Simpler. Faster. Goes the distance.

Simpler:

  • A Web-based GUI enables cluster management from a single console.

  • New Smart Assists help streamline PowerHA implementation in many popular application environments.

Faster:

  • The ability to configure and maintain clusters and applications without stopping them reduces application outages.

  • Faster failure detection results in higher availability and less downtime.

Goes the distance:

  • PowerHA/XD support of intermixed IBM DS storage devices provides excellent disaster recovery capability.

  • IP Address Failover enables you to better manage network communications between sites.

  • PowerHA/XD GLVM support for multiple networks and concurrent mode access provides improved reliability, usability, performance, and protection for data mirroring.

 
Product Positioning

PowerHA is the solution for the estimated 80% of application downtime that is not caused by processor failures.

The prospective customer for PowerHA solutions is any enterprise with requirements to keep business-critical applications and systems operational 7 days per week, 24 hours per day. A PowerHA solution helps you avoid downtime; enables prompt recovery from hardware, network, and application failures; and also gives you the means to take down an individual server (node) for planned maintenance and upgrades without having to take down the entire cluster.

High availability is a growing business need across all industries. The following industries are prime opportunities for PowerHA solutions:

  • Finance/Banking: Nearly 100% require high availability; federal regulations mandate backup sites.

  • Retail (including online and catalog sales): Back-end office operations and business intelligence operations.

  • Healthcare/Insurance: Data mining, data warehousing, and claimant data.

  • Telco/Utilities/Media: Continuous operation of networks and switching equipment.

  • Distribution/Process: Round-the-clock operation; just-in-time delivery.

  • Manufacturing: Continuous access to operational/plant logistics data.

  • Education: Administration data and central data mirroring/backup.

PowerHA opportunities exist in these industries for the following applications:

  • Database/OLTP
  • Enterprise Resource Planning
  • Network computing
  • Business intelligence
  • Any customer application that utilizes any combination of disks and networks

The PowerHA/XD feature is a must for customers with business-critical data who want to mirror data between separate sites to aid in disaster recovery. This applies to businesses of any size, with multiple sites or regional operations, or wherever decentralization of data is desired.

PowerHA is an attractive, affordable high availability solution for small and medium-sized enterprises, and for small and medium-sized business units of large enterprises. High availability should be a fundamental buying criterion for business-critical and on demand applications.

In addition to providing high availability, PowerHA 5.4.0 can also be configured to provide loosely coupled multiprocessing services. These configurations allow workload to be spread across multiple System p servers, sharing the disk and processor resources of the clustered nodes. This clustered approach, along with the capability of application failover and recovery/restart of a PowerHA-configured machine, offers additional levels of high availability processing for customer-critical environments. The Concurrent Resource Manager function of PowerHA 5.4.0 provides an open API that applications can utilize to get concurrent access to a shared disk.

Positioning PowerHA and other cluster servers:

PowerHA is a robust offering for mission-critical availability for up to 32 nodes. It is IBM's strategic high availability offering designed and tuned for System p servers running AIX 5L.

Advantages of PowerHA relative to competitive products:

  • PowerHA provides a broad range of configuration options over all platforms, including a greater number of nodes (32), node interconnect protocols, storage systems, and disk interconnects, allowing increased flexibility of configurations.

  • PowerHA leads the industry in unlimited-distance geographic clustering to support data mirrors and disaster recovery.

  • PowerHA is a cluster technology proven over a decade of service.

  • A rich skills base has been developed in the industry for implementation and support of PowerHA.

  • PowerHA supports System p and System i servers with the most advanced autonomic features in the industry.

  • PowerHA is fully aligned with IBM's On Demand Business strategy to deliver necessary IT infrastructure to meet constantly changing business needs.

  • PowerHA for AIX 5L V5.3 and V5.4 (5765-F62) and HACMP for Linux V5.4 (5765-G71) expand support for the IBM System p 570, Model 9117-MMA.

 
Highlights

PowerHA V5.4.1

IBM High Availability Cluster Multi-Processing (PowerHA) V5.4.1 offers robust high availability and disaster recovery for IBM System p and System i customers with mission-critical applications.

New PowerHA V5.4.1 features include:

  • AIX Workload Partitions (WPAR)

  • PowerHA/XD support of IBM TotalStorage disk subsystem (PPRC) including Consistency Groups

  • New GLVM monitoring

  • NFSv4 support improvements

  • PowerHA usability and RAS improvements

  • New options for detecting and responding to a partitioned cluster

The optional features PowerHA/XD and HACMP Smart Assist for AIX V6.1 provide high availability disaster recovery solutions for your business.

PowerHA V5 offers robust high availability and disaster recovery for IBM System p and System i customers with mission-critical applications.

New features in V5.4 include:

  • Web-based GUI

  • Improved cluster verification tools

  • Smart Assists for turnkey integration with DB2, Oracle, and WebSphere applications

  • Nondisruptive PowerHA cluster startup, upgrades, and real-time maintenance, without application downtime

  • Fast Failure Detection and takeover of production applications on backup servers

  • Metro Mirror support for intermixed environments (DS6000, DS8000, ESS800)

  • PowerHA/XD GLVM Multi-Link feature for improved data mirroring protection and performance

  • Concurrent mode access for simultaneous applications execution at local site

The optional features PowerHA/XD and HACMP Smart Assist provide complete high availability disaster recovery solutions for your business on AIX 5L.

A new PowerHA V5 offering for Linux is now also available for high availability failover on the POWER platform.
 

Description

PowerHA V5.4.1

AIX Workload Partitions

WPAR, a new feature of AIX V6.1, is a software-created virtual operating system environment that exists within a single instance of the AIX operating system. To most applications, the WPAR appears to be a separate instance of AIX because applications and WPARs have a private execution environment. Applications are isolated in terms of process, signal, and file system space. Workload partitions have their own unique users and groups, as well as dedicated network addresses. Applications can be defined as being "WPAR enabled" and PowerHA will automatically keep them highly available using available WPAR resources. Using this approach, you can take advantage of the rich AIX WPAR features combined with the high availability features provided by PowerHA.

With PowerHA V5.4.1 support for WPAR, HACMP will use WPAR resources to keep your applications highly available. PowerHA provides high availability by managing the logical collection of inter-related resources (such as applications, volume groups, and IP addresses) as a single unit. With WPAR support, these resources can be assigned to an AIX WPAR at startup or failover (recovery) time. WPAR also lets you control the amount of resources that a certain application should use by assigning a certain percentage of resources (like CPU, memory, and number of processes) to the WPAR that will host the application.

By using PowerHA in combination with AIX WPAR, you can leverage the advantages of application environment isolation and resource control assignment (provided by AIX WPAR) and the high availability feature provided by PowerHA V5.4.1.

PowerHA/XD support of PPRC Consistency Groups

The IBM TotalStorage disk subsystem has a Peer-to-Peer Remote Copy (PPRC) function for replicating data from a storage unit at a primary site to a storage unit at a backup site. This is commonly used in disaster recovery configurations where a copy of critical application data is maintained at a remote location.

PowerHA/XD supports PPRC by automatically managing the disk subsystems at each site. PowerHA responds to failures by sending the appropriate commands to the disk subsystems to manage the data replication.

PPRC replication is done on a per-volume basis. That is, the data written to the volume at the primary site is replicated to the corresponding volume at the backup. Some applications perform logical updates that span multiple volumes; for example, a database application may write a transaction to one volume and a log of that transaction to another volume. In this scenario the PPRC replication of the data to the backup site must preserve the logical association of the updates even though they occur on different volumes.

By using PPRC Consistency Groups, you can maintain data consistency for application-dependent writes on the same LSS pair or across multiple LSS pairs.

PowerHA/XD supports consistency groups and will react to failures by freezing or unfreezing the PPRC pairs. PowerHA/XD V5.4.1 leverages the advanced features of IBM storage subsystems with the availability features of PowerHA for implementing a disaster recovery solution.
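The freeze behavior described above can be illustrated with a toy model: a consistency group ties volume pairs together so that a replication failure on any one pair suspends mirroring for all of them, preserving the cross-volume ordering of dependent writes. This sketch is an illustration of the semantics only, with hypothetical names, not the PowerHA/XD implementation:

```python
class ConsistencyGroup:
    """Toy model of PPRC consistency-group freeze semantics.

    Each pair mirrors a primary volume to a backup volume. A
    replication failure on any one pair freezes every pair in the
    group, so dependent writes (such as a database transaction on one
    volume and its log record on another) never diverge at the backup.
    """

    def __init__(self, pairs):
        self.pairs = list(pairs)
        self.frozen = False

    def on_replication_failure(self, failed_pair):
        if failed_pair in self.pairs:
            self.frozen = True  # suspend mirroring on ALL pairs at once

    def can_replicate(self, pair):
        return pair in self.pairs and not self.frozen


group = ConsistencyGroup(["db_volume", "log_volume"])
group.on_replication_failure("log_volume")
print(group.can_replicate("db_volume"))  # False: frozen with its group
```

Freezing the database volume together with the failed log volume is what keeps the backup-site copies mutually consistent, at the cost of letting both fall behind the primary until mirroring is resumed.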

GLVM Status Monitor

PowerHA/XD offers a number of data replication options including Geographic Logical Volumes (GLVM). PowerHA/XD with GLVM provides replication of your data to a remote site over IP networks to create an integrated remote replication and high availability disaster recovery solution. PowerHA/XD V5.4.1 introduces two new monitors for GLVM. From SMIT or the command line, these monitors display the status of:

  • GLVM remote physical volumes (RPV)
  • GLVM geographically mirrored volume groups (GMVG)

RPV status information includes the accumulated counts of completed and pending reads, writes, kilobytes read, kilobytes written, and device errors for one or more RPVs. It can also be used to display the maximum recorded numbers of pending reads, writes, kilobytes to be read, and pending kilobytes to be written to an RPV device ("high water mark" values). GMVG status information includes the total number of physical volumes (PV), RPVs, stale volumes, total physical partitions (PP), and stale PPs, as well as the synchronization percentage for one or more GMVGs. Both monitors can run continuously and display updated information on a user-supplied interval basis.
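The synchronization percentage reported for a GMVG follows directly from the partition counts above; the sketch below (a hypothetical function, not the monitor's actual interface) shows the arithmetic:

```python
def gmvg_sync_percentage(total_pps: int, stale_pps: int) -> float:
    """Percentage of physical partitions that are in sync.

    A stale PP is one whose remote copy has not yet caught up, so
    synchronization = (total - stale) / total, expressed as a percentage.
    """
    if total_pps <= 0:
        raise ValueError("a GMVG must contain at least one PP")
    return 100.0 * (total_pps - stale_pps) / total_pps


# Example: 512 PPs with 64 still stale is 87.5% synchronized.
print(gmvg_sync_percentage(512, 64))  # 87.5
```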

GLVM requires AIX V5.3, or later. GLVM is also available in stand-alone form from base AIX; however, this version does not include integrated support with PowerHA/XD.

NFSv4 support

Network Filesystem (NFS) is a mature industry standard for sharing information in a networked environment. PowerHA provides integrated support for keeping NFS highly available in a cluster configuration. The next generation of NFS is V4.

PowerHA support for NFSv4 includes:

  • Better failover of client state using stable storage

  • Support for configuring NFSv4 exports directly through SMIT

  • Support for configuring a file system to be exported with both NFSv2/3 and NFSv4

  • A Configuration Assistant to help create and modify resource groups with NFS exports

NFSv4 support improvements bring greater convenience for configuring NFSv4 exports, as well as improved failover time.

With a mix of NFSv3 and NFSv4, PowerHA will support both protocols to allow for gradual adoption of the new V4 standard.

NFSv4 support with PowerHA requires at minimum AIX V5.3 with Technology Level 5300-07 (bos.net.nfs.client and bos.net.nfs.server V5.3.7.0) or AIX V6.1.

PowerHA usability and RAS improvements

A number of improvements have been made to the ease-of-use, performance, reliability, availability, and serviceability of the PowerHA product. These improvements include:

  • An updated WebSMIT user interface that adds an industry standard "look and feel" as well as options to customize the interface for local language and color preferences. Setup and performance are also improved.

  • First Failure Data Capture and extended, standardized logging make it easier to maintain your high availability environment.

  • Progress indicators and heartbeat metric displays to keep you informed about the operation and status of your cluster.

Multi-Node Disk Heartbeat and Disk Fencing

This feature provides new capabilities for PowerHA to detect and react to a partitioned cluster. There are two new concepts:

  • Multi-Node Disk Heartbeat (MNDHB): Like regular disk heartbeat networks, the disk subsystem is used as the medium for exchanging heartbeat messages. Multi-node disk heartbeat lets you configure network access for multiple nodes instead of the simple point-to-point network available with regular disk heartbeat.

  • Disk Fencing: When a multi-node disk heartbeat network is configured, PowerHA performs additional checks when a node failure is detected. Each node connected to the MNDHB network will check its access to the disks defined for the network. If a node has access to less than a quorum (one more than half) of the disks, it will exercise a configurable policy to either shut down the node, fence it from the disks, or simply run a notification event.
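The quorum rule just described ("one more than half" of the disks) and the three policy outcomes can be sketched as a small decision function. This is an illustration of the logic only, with hypothetical names, not PowerHA code:

```python
from enum import Enum
from typing import Optional


class FencePolicy(Enum):
    HALT = "shut down the node"
    FENCE = "fence the node from the disks"
    NOTIFY = "run a notification event only"


def check_mndhb_quorum(total_disks: int, accessible_disks: int,
                       policy: FencePolicy) -> Optional[FencePolicy]:
    """Return the configured action to take, or None if quorum is held.

    Quorum is one more than half of the disks defined for the
    MNDHB network.
    """
    quorum = total_disks // 2 + 1
    if accessible_disks < quorum:
        return policy  # quorum lost: exercise the configured policy
    return None        # quorum held: no action needed


# With five heartbeat disks, quorum is three; a node reaching only
# two of them acts on its configured policy.
print(check_mndhb_quorum(5, 2, FencePolicy.HALT))  # FencePolicy.HALT
```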

PowerHA for AIX 5L helps provide availability of applications needed to support your On Demand Business. PowerHA V5.4 continues to demonstrate IBM's cluster technology leadership, with new features to make the product even simpler, faster, and more flexible to integrate into IBM or third-party application environments. PowerHA "goes the distance" by including new AIX 5L geographic mirroring support for remotely located cluster configurations.

PowerHA provides base services for cluster node membership, system management, configuration integrity and control, and failover and recovery for applications. PowerHA clusters with both nonconcurrent and concurrent access can contain up to 32 nodes. A node is an AIX 5L operating system image, and it may be a System p or System i server, an RS/6000 server or SP node, or an LPAR of an applicable System p or System i system.

Easy-to-use status and monitoring facilities are included. Scalability provides these capabilities across entire clusters and allows customers to define their own PowerHA events and monitor their applications. PowerHA V5 fully supports administration of AIX 5L Enhanced Concurrent Mode, thus providing concurrent shared-access management for all supported disk subsystems.

PowerHA provides ease-of-use features to speed configuration of your cluster for a variety of application environments. Whether your objective for availability is at the storage level (such as ESS or SVC), volume group or logical volume level (such as LVM), clustered file system level (such as with GPFS), the application level (such as DB2, Oracle, or WebSphere), or the site level (such as to support disaster recovery), PowerHA has features and options to assist you with successful integration into your environment. After your cluster is configured, the changes are synchronized accurately across all nodes and the cluster can be monitored easily with methods appropriate to your environment.

The optional PowerHA/XD and HACMP Smart Assist features provide additional automated data backup, disaster recovery, and database environment configuration assistance to help protect your business, and are described in following sections.

All of the facilities of PowerHA are available for and with IBM's System p Capacity on Demand (CoD), On/Off CoD, and Capacity Backup (CBU) offerings. This enables you to configure clusters that are scalable and to easily expand clusters' CPU and memory capacity as the need arises, without having to pay upfront for hardware that is not yet used.

Highlights of Version 5.4

PowerHA V5.4 includes enhancements that improve ease of use, performance, and geographic distance capabilities.

New features that make PowerHA V5 simpler:

  • Web-based GUI

    WebSMIT enables Web-based cluster management for configuring, monitoring, and managing clusters from the same management console. You can now view multiple clusters from one single, consistent interface. You can also view the cluster configuration and status simultaneously.

  • Resource group management utility improvements

    Enhancements make it easier for you to move resource groups for cluster management, and to maintain previously configured behavior for a resource group, including priority override handling.

  • Verification enhancements

    The cluster verification functionality, which validates your preproduction cluster environment, is upgraded to include a number of customer-requested enhancements. Automated nightly cluster verification reduces the risk that a change to the cluster may interfere with future cluster operation or failover.

  • Enhancements for Smart Assists:
    • PowerHA Application Integration provides a common infrastructure that can be used by all Smart Assists, as well as by the Cluster Test Tool. This enables a more rapid and consistent deployment of future Smart Assists.
    • Oracle Smart Assist was enhanced to self-configure PowerHA to monitor the entire Oracle process stack, including the Oracle Listener, and provide failover capabilities for the Oracle database.
    • Discovery of DB2 and WebSphere components is performed automatically by the PowerHA Smart Assist software and is no longer a separate step in the process.
    • A General Application Smart Assist is provided to help developers more easily construct Smart Assists of their own (applications other than the three that have their own Smart Assists). This program is similar to the 2-Node Configuration Assistant, but is not limited to two nodes. A Smart Assist Developer's Guide is also provided; this includes a sample program for a Smart Assist and information on using the Smart Assist Framework and API.

  • GPFS 2.3 integration

    Basic integration of PowerHA with GPFS 2.3 provides improved file system management.

New features that make PowerHA V5 faster:

  • Nondisruptive startup and upgrade

    Moving applications into a highly available environment with PowerHA is now even easier through the power of nondisruptive installation, configuration, and upgrade. PowerHA 5.4 no longer requires a system shutdown after installation, and you may apply PowerHA service and upgrades without disrupting production applications. The "Forced down handling" feature of PowerHA V5.4 provides a building block for nondisruptive upgrades of PowerHA.

  • Fast Failure Detection on node failure

    This facility gives you the option of even higher availability and less downtime through quicker recognition of node halt events, by integrating with the AIX 5L halt command. Node failures are recognized among the nodes in the cluster within one missed heartbeat period. This method requires that you configure a disk heartbeat network.

New features that enhance geographic distance capability

  • Metro Mirror enhancements

    You can now integrate the disaster recovery functionality of PowerHA/XD into your two-site cluster, utilizing the data mirroring functions resident within DS8000, DS6000, and ESS800 storage devices. Support is also now provided for intermixed environments.

  • IP Address Failover on Geographic Networks

    This feature enables you to manage and move network communications between disparate sites, allowing you to take over IP workloads at backup site locations.

  • PowerHA/XD GLVM Multi-Link

    GLVM now supports up to four networks, providing improved protection for data mirroring and allowing replication to continue in the event of a network failure. You can take advantage of higher aggregate network bandwidth using this feature where more than one network is available, thus improving mirroring performance across sites.

  • PowerHA/XD GLVM Concurrent mode access

    Concurrent application access in a disaster recovery environment is now available through PowerHA/XD GLVM, allowing multiple node application processing within a production site, while still being able to back up to a secondary site.

Optional PowerHA/XD feature for ESS, DS, and SVC Metro Mirror, geographic LVMs, and IP-based mirroring configurations

This optional feature of PowerHA V5 offers, in one package, multiple technologies for achieving long distance data mirror, failover, and resynchronization:

  • PowerHA/XD GLVM is an IP-based mirroring technology. PowerHA/XD GLVM data replication is built upon the AIX 5L Logical Volume Manager (LVM), using it to drive replication and synchronization of AIX 5L logical volumes. For additional information regarding GLVM, refer to Software Announcement 205-085, dated March 15, 2005.

  • PowerHA/XD supports ESS, IBM TotalStorage, and SVC Metro Mirror (formerly known as Peer-to-Peer Remote Copy (PPRC)), providing automatic failover of disks that are configured as PPRC pairs, creating a powerful disaster recovery solution for customers on ESS, DS8000, or SVC. PowerHA/XD automates the management of Metro Mirror, minimizes recovery time after an outage, and monitors your clustered environment to ensure mirroring of critical data is maintained at all times.

  • PowerHA/XD IP-based mirroring provides the well-known unlimited distance data mirroring of the former IBM HAGEO product. PowerHA/XD delivers a fully integrated copy of HAGEO V2.4, allowing a cluster of System p computers to be placed in two widely separated geographic locations, each maintaining an exact replica of the application and data.

Data synchronization during production, failover, recovery, and restoration is provided.

Optional PowerHA Smart Assist feature

PowerHA V5 offers four HACMP Smart Assist applications to help you easily integrate these applications into a PowerHA cluster:

  • Smart Assist for DB2 extends an existing PowerHA configuration to include monitoring and recovery support for DB2 Universal Database (UDB) Enterprise Server Edition.

  • Smart Assist for Oracle provides assistance for installing the Oracle Application Server 10g (9.0.4) (AS10g) Cold Failover Cluster solution on the AIX 5L operating system.

  • Smart Assist for WebSphere has been updated and improved for PowerHA 5.4. It extends an existing PowerHA configuration to include monitoring and recovery support for various WebSphere components, including WebSphere Application Servers and WebSphere Application Server Network Deployment (Deployment Manager). Smart Assist for WebSphere now has a SMIT user interface with options to quickly configure different types of typical cluster configurations.

  • A new General Application Smart Assist is provided for quicker configuration of your application with PowerHA (applications other than the three that have their own Smart Assists). This program is similar to the 2-Node Configuration Assistant, but is not limited to two nodes. A Smart Assist Developer's Guide is also provided; this includes a sample program for a Smart Assist and information on using the Smart Assist Framework and API.

PowerHA and Smart Assist increase the availability of a database solution by:

  • Monitoring the Deployment Manager and automatically restarting it on backup servers

  • Monitoring critical services (such as a TDS server) and automatically restarting them on backup servers if they fail

  • Ensuring that all necessary system resources (for example, storage devices and IP addresses) are configured and made available on backup servers in support of application migration

Accessibility by people with disabilities

A US Section 508 Voluntary Product Accessibility Template (VPAT) containing details on the product's accessibility compliance can be requested via IBM's web site at the following URL:

http://www-3.ibm.com/able/product_accessibility/index.html

 
Technical Description
  • Operating Environment
  • Hardware Requirements
  • Software Requirements


The term "high availability" describes a set of software functions and a computing configuration that recovers from failures and provides a better level of protection against system downtime than standard hardware and software alone.

PowerHA V5 is a high availability software product that runs on each node in a loosely coupled cluster. It provides application availability by detecting and reacting to failures of systems, processors, adapters, networks, disks, or applications. When these failures occur, PowerHA makes use of redundant hardware in the cluster to keep the application running. In the event of a complete node failure, PowerHA restarts the application on a backup node.

When a failure occurs, or when failed components are restored to operation, PowerHA responds based on policies specified when the cluster was defined. The PowerHA product can be extended or tailored by the system administrator to perform extra operations or accommodate additional resource types. However, PowerHA relies on the application to make any failure or recovery transparent to external users and client machines.

If a node fails, nominal recovery time is approximately 30 to 300 seconds. Actual recovery time depends on the system configuration, the application configuration, the size of the user's databases, and any database or application level recovery that must be performed.

PowerHA V5 builds on IBM's position as a leader in high-availability clustering technology with new and improved functionality for:

  • Usability
  • Performance
  • Disaster recovery
  • System administration
  • Additional hardware support

PowerHA is designed to detect system failures and manage failover to a recovery node, providing continuous application availability. (A "node" is an AIX 5L operating system image running an instance of the PowerHA cluster manager. It may be a System p or System i server, an RS/6000 SP node, or an LPAR of an applicable system.)

PowerHA V5 provides services for cluster membership, system management, configuration integrity and control, failover, and recovery for up to 32 nodes. It takes advantage of AIX 5L's Reliable Scalable Cluster Technology (RSCT) to monitor nodes, networks and adapters. Easy-to-use cluster status displays are included. PowerHA allows customers to define their own cluster events and monitor their applications. PowerHA V5 fully supports administration of AIX 5L Enhanced Concurrent Mode, providing concurrent shared-access management for all supported disk subsystems. Concurrent access is provided at the raw logical volume level.

The PowerHA/XD (Extended Distance) option provides automated data backup and disaster recovery across geographically dispersed clusters, protecting business-critical applications against disasters that affect an entire site.

The PowerHA Smart Assist feature builds upon existing DB2, Oracle, and WebSphere availability strategies by integrating the power of PowerHA for monitoring and recovering from failures of system-level services and components, reducing database application downtime. These Smart Assists enable system administrators to easily integrate PowerHA into their applications' environment and quickly configure mutual takeover clusters, or other cluster configurations, that support WebSphere applications or DB2 or Oracle database instances.

Highlights of PowerHA V5.4

  • New features that enhance ease of use
  • New features that enhance performance and security
  • New features that enhance geographic distance capability
  • Other changes or enhancements

Product technical information

Cluster Configurations:

A PowerHA cluster is made up of:

  • Physical resources, such as nodes, network interfaces and volume groups

  • Logical resources, such as applications (start and stop scripts), service IP addresses, and mounted file systems, that can be activated on any one of an equivalent set of physical resources or cluster nodes. For example,

    • An application can run on any of a set of nodes

    • A service IP address can be made active on any of a set of network interfaces

    • A volume group can be varied on any of a set of nodes

  • Resource groups - collections of logical resources that are related to an application and must all be available on a node for the application to run there.

  • Policies that determine which physical resource will hold a logical resource, when there are multiple choices available.

PowerHA will then, in accordance with the specified policies, move logical resources around so as to keep applications running despite hardware and software failures.

PowerHA V5 defines policies for resource groups in terms of three behaviors:

  1. On which node(s) the resource group containing an application will get started. This is known as a startup policy for the resource group. The choices are:
    • First available node
    • Home (highest priority) node
    • All available nodes
    • Online Using Distribution (to avoid congestion on nodes)

  2. On which node the application will become activated when the current resource group owner node fails. The choices are:
    • Failover to next highest priority node
    • Failover by dynamically selecting the least loaded surviving node
    • Bring offline

  3. On which node the application will become activated when a failed resource group owner comes back online. The choices are:
    • Failback to higher priority node
    • Never fall back
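Taken together, the startup, failover, and failback choices above amount to three small selection rules. The following Python sketch illustrates them under stated assumptions; the function names, policy strings, and data structures are invented for illustration and are not part of PowerHA, which exposes these policies through SMIT configuration rather than an API:

```python
# Illustrative model of the three resource-group policy dimensions.
# All names here are hypothetical; PowerHA itself is configured
# through SMIT, not through Python functions like these.

def startup_node(priority_list, online_nodes, policy):
    """Pick where a resource group comes online at cluster startup."""
    candidates = [n for n in priority_list if n in online_nodes]
    if not candidates:
        return None
    if policy == "first_available":
        return candidates[0]
    if policy == "home_node":
        # only start on the highest-priority (home) node
        return candidates[0] if candidates[0] == priority_list[0] else None
    if policy == "all_available":
        return candidates          # concurrent: online on every node
    raise ValueError(policy)

def fallover_node(priority_list, surviving_nodes, policy, load=None):
    """Pick the takeover node when the current owner fails."""
    candidates = [n for n in priority_list if n in surviving_nodes]
    if policy == "next_priority":
        return candidates[0] if candidates else None
    if policy == "least_loaded":   # dynamic selection of least loaded node
        return min(candidates, key=lambda n: load[n]) if candidates else None
    if policy == "bring_offline":
        return None
    raise ValueError(policy)

def fallback_node(priority_list, current_owner, rejoined_node, policy):
    """Decide whether the group moves when a node rejoins the cluster."""
    if policy == "never":
        return current_owner
    # "higher_priority": return only if the rejoined node outranks the owner
    if priority_list.index(rejoined_node) < priority_list.index(current_owner):
        return rejoined_node
    return current_owner
```

For example, a two-node Hot Standby group with priority list ["A", "B"] starts on A, fails over to B when A fails, and falls back to A when A rejoins.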

A resource group with an application can be configured to behave exactly as the system administrator managing the application would like. To achieve this, the appropriate resource group policies are selected, along with other customizable attributes.

There are additional parameters that determine how the resource groups behave at startup, failover and failback that allow resource group behavior to be tailored to particular enterprise needs. For instance, resource groups can be configured to be brought back online on reintegrating nodes during off-peak hours.

  • Settling Time. The settling time affects the startup behavior of a resource group. It gives the cluster time to wait for a higher priority node that may be joining the cluster, so that the resource group can be activated on that node.

  • Delayed Failback Timer. A resource group's failback can be configured to occur at a predefined recurring time (daily, weekly, monthly, or yearly) or on a specific date and time. This allows the outage associated with a failback to coincide with a maintenance window or other planned server downtime.
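The recurrence logic behind a delayed failback timer can be sketched as follows. This is a simplified illustration only (daily and weekly cases, with invented function and parameter names); PowerHA configures these timers through SMIT:

```python
# Hedged sketch of a delayed-failback timer: compute the next time a
# recurring maintenance window opens. Illustrative only.
from datetime import datetime, timedelta

def next_fallback_time(now, recurrence, hour, minute=0, weekday=None):
    """Return the next time a delayed fallback is allowed to run."""
    candidate = now.replace(hour=hour, minute=minute, second=0, microsecond=0)
    if recurrence == "daily":
        if candidate <= now:
            candidate += timedelta(days=1)
        return candidate
    if recurrence == "weekly":
        # weekday: 0 = Monday ... 6 = Sunday, as in datetime.weekday()
        candidate += timedelta(days=(weekday - candidate.weekday()) % 7)
        if candidate <= now:
            candidate += timedelta(days=7)
        return candidate
    raise ValueError(recurrence)
```

A weekly timer set for Sunday at 02:00, for instance, defers any failback until the next such window opens.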

Some examples of these policies in two-node clusters:

  • Hot Standby cluster configuration: In this configuration, all resource groups are Online on Home Node Only, with a single node having the highest priority for them all. The resource groups are also configured to fail over to the Next Highest Priority Node, so that if the owning node fails, the standby node takes over the resources. And the resource groups are configured to fall back to the Higher Priority Node, so that when the failed node rejoins the cluster, the resources are returned to the original owning node. That is, one node is normally idle, waiting to recover should the other fail.

  • Rotating Standby cluster configuration: This configuration is identical to a Hot Standby, except that the resource groups are configured to Never fall back. So, when the failed node rejoins the cluster, the resources are not returned to the node until the standby node fails.

  • Mutual Takeover cluster configuration: In this configuration, the resource groups are configured for Hot Standby, but divided among the nodes; some are defined as owned by each node. If either node fails, the other node takes over all of the resources. When the failed node rejoins the cluster, the resources are returned to the original owning node. That is, each node backs up the other.

  • Concurrent Access cluster configuration: In this configuration, two nodes are active simultaneously, sharing the same physical disk resources. The resource groups that contain the disks are configured to come online on All Available Nodes. Any other resource groups are distributed between the two nodes, each owning some of them; the resource groups not owned by both nodes are designated as in the Hot Standby configuration: if either node fails, the other node takes over all of the resources. When the failed node rejoins the cluster, the resources are returned to the original owning node.

With clusters of up to 32 nodes, the inherent configuration flexibility is tremendous, and limited mostly by the physical attachment capabilities of the shared disk subsystems.

  • SCSI disks can be attached to up to four nodes
  • SSA disks can be attached to up to eight nodes
  • Fibre Channel disks can be attached to up to 32 nodes

With this in mind, the cluster designer can select the number of normally active nodes and the number of standby nodes based on the processing and availability goals of the cluster. Likewise, the designer can select to have all resource groups from a failed node go to a single surviving node, or be spread among several. For failures that do not take down an entire node, PowerHA will selectively move only the affected resource groups. PowerHA will also recover any groups that are offline when a failed resource rejoins the cluster.

Data Access Configurations:

PowerHA supports the typical data access configurations used by applications sharing the same data on multiple nodes within a cluster.

  • Concurrent Access

    In a concurrent access configuration, multiple systems each have their own path to the disks holding the data. Any system in the cluster can physically access the data. In this configuration, the systems must cooperate to ensure that accesses and updates do not cause data corruption. Such configurations provide a high degree of scalability, limited only by the number of systems that can simultaneously attach to the shared disks. Note, however, that these types of configurations commonly require an application designed to take advantage of concurrent access. Applications should also have their own locking mechanism.

  • Partitioned Data Access

    Another solution to the problem of multiple systems accessing the same data is to have the data logically partitioned by a system within the cluster with a database manager (DBM) providing access to both partitions. Each system within the cluster has sole access to a dedicated partition of the total data set.

    The system nodes themselves are interconnected through a fast communications link. When a request is made to a particular system, the system decides whether it can locally access the data or not. If not, either the request is forwarded by the DBM to the owning system, or the data is retrieved by the DBM from the owning system.

    Such a configuration can be described as "partitioned data access" because the individual systems can process requests in parallel and access the data distributed across the cluster. In this instance, the cluster scales to the degree that the communications path between the systems does not become a bottleneck.

PowerHA provides advantages to either type of data access in the cluster. If the disks can be physically connected to two or more systems, then PowerHA can react to a node failure by restarting the application on another system able to access the same disks.

The configuration flexibility of PowerHA allows customers to choose the cluster topology and data access configuration that most suits the requirements of their computing environment. PowerHA can support concurrent and partitioned data access within a common cluster.

Cluster Manager:

The Cluster Manager is the PowerHA component that monitors the state of cluster resources, reacts to failures, and responds to administrative requests. An instance of the cluster manager runs on each node of the cluster.

The Cluster Manager uses the services of AIX 5L's Reliable Scalable Cluster Technology (RSCT) to monitor nodes, networks and adapters. At the lowest level, RSCT monitors the nodes and network interfaces associated with a cluster by sending "heart beats" - short messages - between the nodes. The pattern of heart beats is chosen so that every node, network, and adapter is constantly monitored. Any loss of heart beat is used to identify failing nodes, networks, or communications interfaces.
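The heartbeat principle can be reduced to a simple rule: an interface that has been silent for longer than a tuned number of intervals is declared failed. A minimal sketch of that rule (the interval and threshold values here are illustrative, not RSCT defaults):

```python
# Minimal sketch of heartbeat-based failure detection as described
# above. Real RSCT topology services use tuned heartbeat rings; the
# parameters below are invented for illustration.

def detect_failures(last_heartbeat, now, interval=1.0, missed_threshold=5):
    """Return interfaces whose heartbeats have stopped for longer than
    missed_threshold * interval seconds, i.e. those declared failed."""
    deadline = interval * missed_threshold
    return sorted(
        iface for iface, t in last_heartbeat.items() if now - t > deadline
    )
```

Note that a single missed heartbeat does not trigger recovery; only sustained silence beyond the detection deadline does, which avoids false failovers on transient network glitches.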

PowerHA can also respond to errors detected by AIX 5L (for example, loss of volume group quorum) and to application monitors. Application monitors can be configured to detect the death of a particular process or to execute a custom monitoring function.

PowerHA by default uses a security mechanism that does not need a /.rhosts file. PowerHA's own Security Communications utility replaces the use of the standard rcmd function. This utility is automatically configured to accept requests only from other cluster nodes. Requests are validated based on source IP address and port, and run at the least-privileged level. If extra security is needed, the security mechanism can be configured to use VPN tunnels.

System Management:

All PowerHA system management can be done using the standard System Management Interface Tool (SMIT) or WebSMIT, which makes system management easier and more accessible. PowerHA provides a "standard configuration" path and an "extended configuration" path. The standard path eases the configuration task by presenting you with the most relevant and frequently used options, and aids pick-list selection by using as much automatic discovery of the underlying configuration as possible. If finer control is needed, or there are special requirements, the "extended configuration" path may be used. In addition, SMIT screens are generated from prior choices whenever possible, so that each successive SMIT screen contains only the questions or information appropriate to the particular task.

Furthermore, PowerHA provides several facilities to ease the installation, configuration, and management of a highly available cluster.

  • Version Compatibility allows nodes running earlier versions of PowerHA to interoperate with those running PowerHA V5.4. A customer can upgrade an existing cluster running PowerHA V5.1, V5.2, or V5.3 without taking the entire cluster offline. During the upgrade process, individual nodes can be removed from the cluster, upgraded one at a time, and then reintegrated into the cluster. Migration aids are available to help convert existing PowerHA configuration files into the form required by PowerHA V5.4. In addition, the PowerHA V5.4 documentation contains separate, detailed roadmaps that walk administrators through each migration or upgrade scenario they may have in their environment, enabling a successful upgrade or migration to PowerHA V5.4.

  • The Cluster Snapshot captures a cluster configuration, creating text files that contain all the information necessary to configure a similar cluster. Once captured, these snapshots - created in ASCII text format - can be applied to either this cluster to restore a known good configuration, or to another cluster to instantiate a new configuration. In addition, the cluster snapshot can be read into the online planning worksheets, and the existing worksheet can be reused by the cluster snapshot. This creates more flexibility in how users can preserve and reuse their existing successful cluster configurations.

  • Cluster Single Point of Control enables the user to perform certain common administrative operations across the cluster from a single SMIT session. These operations are: starting and stopping PowerHA; adding, changing, and deleting users and groups, and modifying user's passwords; and operations on shared and concurrent volume groups. For all such operations, the Cluster Single Point of Control facility removes the need to manually synchronize the change across the cluster.

  • Dynamic Reconfiguration allows the user to change the configuration of a running cluster. That is, nodes, networks, and adapters can be added to or removed from the cluster definition. In addition, the definitions of cluster resources and resource group policies can be changed. These changes take effect immediately, without having to disrupt either PowerHA or the applications running on the cluster.

  • clstat provides a text or graphical display of the status of one or more clusters, and of the nodes, interfaces, and resource groups in those clusters. The clstat utility also lets you view which networks and interfaces are currently up or down in the cluster.

  • Resource Group Management enables customers to use the SMIT interface to move resource groups between nodes and to bring them online or offline. Other options allow groups to be kept persistently on a particular node, overriding normal placement policies, and to suspend and resume application monitoring.

  • Application Availability Analysis Tool provides a tool for measuring application availability. A log file is maintained to capture application and node startup and outages. The analysis tool reads the log and generates a report including availability metrics.
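The metric such a tool reports is essentially uptime over a measurement period, reconstructed from timestamped up/down events. A hedged sketch of that computation (the event format here is invented and is not the tool's actual log format):

```python
# Sketch of the availability computation the Application Availability
# Analysis tool performs: replay timestamped up/down events and report
# the percentage of the period the application was up.

def availability(events, period_start, period_end):
    """events: time-ordered list of (timestamp, 'up' | 'down'); the
    application is assumed down until its first 'up' event."""
    up_time = 0.0
    state, since = "down", period_start
    for t, ev in events:
        if state == "up":
            up_time += t - since       # close out the running up-interval
        state, since = ev, t
    if state == "up":
        up_time += period_end - since  # still up at the end of the period
    return 100.0 * up_time / (period_end - period_start)
```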

  • PowerHA Cluster Verification is a tool that ensures that the PowerHA configuration is complete and consistent across the cluster, and consistent with the AIX 5L configuration on all nodes. It provides extensive diagnostics to assist the cluster administrator in identifying cluster problems before they cause a failure.

  • Online Planning Worksheets (OLPW) provide a graphical tool to assist in planning a cluster. This information can be saved as documentation of the cluster, or used to generate the actual cluster configuration. The OLPWs can run on any system that supports a Java runtime environment. The OLPW cluster definition can be saved in XML format for easy edits. Cluster snapshot can also be saved to this format. OLPWs allow administrators to recover the planning worksheets for an existing cluster.

  • HAView allows you to monitor PowerHA clusters through the NetView network management platform. The HAView application monitors the clusters using SNMP. PowerHA contains the information about cluster topology and state; HAView displays the configuration and state of the clusters and cluster components through the NetView graphical user interface.

  • HATivoli allows you to monitor the state of an PowerHA cluster and its components through your Tivoli Framework enterprise management system. Using various windows of the Tivoli Desktop, you can monitor the state of the cluster, networks, nodes, resource groups and their locations.

Ease-of-Use

  • Two Node Configuration Assistant

    Simplifies the process of configuring a two-node PowerHA cluster when the individual nodes are already connected to shared disks and networks. This tool is designed to allow users with basic knowledge of PowerHA to quickly and correctly set up a basic two-node cluster containing one nonconcurrent resource group, one application server, all shared volume groups and one service label.

  • Cluster Test Tool

    Tests a PowerHA cluster by generating specific events such as node and network failures, resource group movements, and so forth. It can be run unattended, producing a report of the results. This tool is intended to help validate an initial PowerHA configuration prior to its use in a production environment, and after configuration changes while the cluster is out of service.

  • Simplified Resource Group Definition

    All resource groups are defined in terms of explicit policies, in the same way as custom resource groups were configured in the prior release. Existing resource groups are converted on migration.

  • Dependent Resource Groups

    Resource groups can be configured as dependencies for other resource groups. This allows for easier configuration and control in clusters with multi-tier applications, where one application depends on the successful startup of another, and all are required to be kept highly available by PowerHA. Dependencies are cluster-wide; the applications need not be running on the same node.

    Resource groups can be kept together on the same node or the same site. Or, they can always be kept on separate nodes and/or separate sites. This allows the system administrators to plan for and configure more complex environments, where not just one but several inter-dependent applications are kept highly available and are started or being run on specific sets of nodes. The PowerHA publications contain real-life examples of multi-tier application production environments.

    Additionally, resource groups can be automatically distributed at startup time, so that only one is brought online on a node. This assists in load balancing across the cluster.

  • File Collections

    File collections provide a vitally important function to system administrators, by letting them keep application-configuration information in sync. This feature automatically keeps a specified list of files in sync across the cluster. PowerHA also detects when a file in a file collection is deleted or truncated, and logs a message to inform the cluster administrator. Two predefined PowerHA file collections are installed by default, to ensure that critical PowerHA and system files are kept in sync.
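Conceptually, keeping a file collection in sync starts with detecting divergence, for example by comparing content checksums across nodes. A sketch under that assumption (the data structures are hypothetical; PowerHA's actual propagation mechanism is internal to the product):

```python
# Hedged sketch of a file-collection consistency check: report which
# files in the collection differ, or are missing, on any cluster node.
import hashlib

def out_of_sync(collection, node_files):
    """collection: list of paths; node_files: {node: {path: bytes}}.
    Returns paths whose content is not identical on every node."""
    stale = []
    for path in collection:
        digests = set()
        for files in node_files.values():
            content = files.get(path)
            digests.add(None if content is None
                        else hashlib.md5(content).hexdigest())
        if len(digests) > 1:       # more than one distinct version seen
            stale.append(path)
    return stale
```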

  • Cluster Verification with auto correction

    Cluster Verification in PowerHA checks for a wider variety of issues, and applies more automatic corrections, than in previous releases. The issues that PowerHA now identifies as incorrect could have existed in cluster configurations in previous releases without necessarily breaking the cluster. A list of errors for which PowerHA's verification utility takes corrective action is included in Chapter 6 of the Administration Guide. The clverify utility can be authorized to correct inconsistencies it detects, such as volume group definitions or missing cluster information in system files.

  • Automatic Cluster Configuration Checking

    Automatically runs the clverify utility at midnight on one cluster node. PowerHA notifies the cluster administrator if problems are detected. This reduces the risk that a configuration change will cause a failure at a later time.

  • Web-based Cluster Management

    A browser-based user interface (WebSMIT) provides consolidated access to the PowerHA SMIT functions for configuration and management, a new interactive cluster status display, and links to PowerHA documentation. It is similar to the ASCII SMIT interface, so it should be familiar to existing users. Being Web based, it can be accessed from any platform. PowerHA V5 also secures WebSMIT, as well as other PowerHA utilities that are accessible remotely.

  • Display Cluster Applications

    Provides an application-centric view of the cluster configuration, with applications listed first, and for each of them, the nodes, volume groups and networks associated with that application. ASCII and Web based versions are provided. The web based version supports expanding and collapsing sections of the display; the state of cluster objects is indicated with different colors.

  • Cluster-wide User Password Change

    A new cluster password utility links to the AIX 5L password utility to allow users to change their passwords across the cluster, when authorized by the cluster administrator.

Additional functionality

  • Recover Planning Worksheets

    The Online Planning Worksheets tool can recover the planning worksheets for an existing cluster. This allows the administrators to get a convenient, readable description of the cluster.

  • Application Monitoring

    Multiple monitors can now be specified for an application, allowing both a custom monitor, and process death monitors to be in place. Additionally, the application can be monitored during startup - that is, during the initial stabilization interval. This is strongly recommended for applications in resource groups on which other resource groups depend.
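The stabilization interval amounts to a grace period during which probe failures are tolerated while the application initializes. A minimal sketch of that decision (the names and return values are illustrative, not PowerHA interfaces):

```python
# Sketch of application monitoring with a startup stabilization
# interval: failures during startup are tolerated; failures after the
# interval trigger recovery.

def monitor_action(probe_ok, seconds_since_start, stabilization_interval):
    """Return what the cluster manager should do after one probe."""
    if probe_ok:
        return "none"
    if seconds_since_start < stabilization_interval:
        return "wait"        # still starting up; keep probing
    return "recover"         # stable-phase failure: run recovery
```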

  • Reset PowerHA Tunables

    Resets the various tunables in the PowerHA configuration to the install time default, restoring the cluster to a known initial state. This assists in troubleshooting cluster problems, and can provide a known starting point for administrators taking over an existing cluster.

  • Security

    Builds on the existing connection authentication in PowerHA and the CtSec facility of RSCT to provide message authentication. This ensures the origin and integrity of a message and prevents spoofing attacks against the PowerHA communications daemons. For further security, message encryption can be specified. MD5, DES, Triple DES, and AES encryption are supported.
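The idea of message authentication, attaching a keyed digest so the receiver can verify origin and integrity, can be sketched with a standard HMAC. This illustrates the concept only; PowerHA's actual mechanism is built on RSCT CtSec, not on code like this:

```python
# Conceptual sketch of message authentication between cluster daemons:
# a keyed digest accompanies each message; tampering or spoofing
# invalidates the tag.
import hashlib
import hmac

def sign(key, message):
    """Compute a keyed digest (tag) for a message."""
    return hmac.new(key, message, hashlib.md5).hexdigest()

def verify(key, message, tag):
    """Check origin and integrity; constant-time comparison avoids
    leaking tag information through timing."""
    return hmac.compare_digest(sign(key, message), tag)
```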

  • Optional PowerHA/XD (Extended Distance) feature

    The PowerHA/XD (Extended Distance) option provides automated data backup and disaster recovery across geographically dispersed clusters, protecting business-critical applications against disasters that affect an entire site. PowerHA/XD provides data mirroring functions (which are not provided in base PowerHA) and drives automatic mirror resynchronization after a site outage.

    This optional feature of PowerHA V5 offers, in a single package, multiple technologies for achieving long-distance data mirroring, failover, and resynchronization:

    • PowerHA/XD Geographic Logical Volume Manager (GLVM) is a software-based technology for mirroring mission-critical data over standard TCP/IP networks. GLVM supports AIX 5L logical volumes to help keep your business running. PowerHA/XD GLVM exploits IP to enable one or two copies of an AIX 5L logical volume to be located at a remote site separated by unlimited distance, thus allowing automated failover to that remote site with minimal risk of data loss. GLVM uses host-based (LVM) mirroring to replicate data between sites over the IP network.

    • PowerHA/XD support of IBM ESS Metro Mirror (formerly known as PPRC), IBM TotalStorage, and SVC PPRC provides automatic failover of disks that are PPRC pairs and creates a powerful solution for customers on ESS, DS8000, and SVC with PPRC. By automating the management of PPRC, recovery time is minimized after an outage, regardless of whether the clustered environment is local or geographically dispersed. PowerHA/XD in combination with PPRC manages your clustered environment to ensure mirroring of critical data is maintained at all times. PowerHA/XD also supports the ESS Enterprise Remote Copy Management Facility. eRCMF provides significantly enhanced ease of configuration and management for PPRC.

    • PowerHA/XD IP-based mirroring provides the well-known unlimited distance data mirroring of the former IBM HAGEO product. IP-based mirroring allows a cluster of System p computers to be placed in two widely separated geographic locations, each maintaining an exact replica of the application and data. Data synchronization during production, failover, recovery, and restoration is provided. PowerHA/XD is independent of the disk storage used. RAID or mirroring can be used for local protection. PowerHA/XD IP-based mirroring is done at the logical volume layer.

  • Optional PowerHA Smart Assist feature

    Smart Assist builds upon existing DB2, Oracle, and WebSphere availability strategies to deliver even higher levels of availability for your database environment by integrating the power of PowerHA for AIX 5L for monitoring and recovering from failures of system-level services and components. Smart Assist expands PowerHA's auto-discovery features to simplify and streamline the configuration process. These auto-discovery features automatically detect components of your database environment, such as networks (IP and point-to-point), network adapters and devices, volume groups, and file systems, and nominate them for configuration into PowerHA.

    PowerHA Smart Assists extend PowerHA's ability to monitor different customer applications and to integrate PowerHA clusters with those applications. PowerHA Smart Assists also recover system-level services and application components from failures, reducing application downtime.

    For each of the services that you choose to protect in your database environment, Smart Assist will scan your existing database and PowerHA configurations. It will then draw upon its own knowledge base of optimal integrations for the two products and automatically create the PowerHA configuration necessary to monitor the protected services and recover from failures. The net result is reduced time and effort to configure the highest levels of availability for your environment, leaving you more time to focus on other important functions that drive and support your on demand business.

    Smart Assist does not replace PowerHA or your database application, but rather supplements them. You must install and configure your database so that PowerHA Smart Assist can gather the information required to protect your database services. You must install PowerHA on the systems you want to be clustered, and configure your base network topology for the cluster. This enables Smart Assist to determine which systems are available to configure as backup for database services to be protected.

    PowerHA and Smart Assist increase the availability of a database solution by:

    • Monitoring the Deployment Manager and automatically restarting it on backup servers

    • Monitoring third-party products (such as a TDS server) and automatically restarting them on backup servers if they fail

    • Ensuring that all necessary system resources (for example, storage devices and IP addresses) are configured and made available on backup servers in support of application migration.

Operating Environment

Hardware Requirements

PowerHA V5 works with System p and System i servers in a "no-single-point-of-failure" server configuration.

PowerHA V5 supports the System p and System i models that are designed for server applications and which meet the minimum requirements for internal memory, internal disk, and I/O slots. The following System p and System i models and their corresponding upgrades are supported in PowerHA V5.3 and V5.4:

  • PCI desktop systems: Models 140, 150, 170, 240, 260, and 270

  • PCI deskside systems: Models E20, E30, F30, F40, F50, F80, 6F0, and 6F1 (pSeries 620)

    Note: A heartbeat network is supported only on serial port 3 or 4 on the Model 6Fx.

  • Entry systems: Models 25S, 250, and 25T

  • Compact server systems: Models C10 and C20

  • Desktop systems: Models 370, 380, 390, 397, and 39H

  • Deskside systems: Models 570, 57F, 580, 58F, 58H, 590, 59H, 591, 595, 7028-6E1 (pSeries 610), 7029-6E3 (pSeries 615), 7025-6F1 (pSeries 620), and 7028-6E4 (pSeries 630)

  • Rack systems: Models 98B, 98E, 98F, 990, 99E, 99F, 99J, 99K, B80, R10, R20, R21, R24, R50, R5U, S70, S7A, H10, H50, H70, H80, M80, 7028-6C1 (pSeries 610), 7029-6C3 (pSeries 615), 7028-6C4 (pSeries 630), 6H0, 6H1, 6M1 (pSeries 660), 7039-651 (pSeries 655), and 7038-6M2 (pSeries 650), including models with PCI-X Expansion Drawer (7311-D10 and 7311-D20)

  • High-end servers: Models 7040-681 (pSeries 690), 7040-671 (pSeries 670), including models with POWER4+ processors, and PCI-X Planar 10 slot I/O drawer (7040-61D #6571)

  • System p5 185, 505, 510, 51Q, 520, 520Q, 550, 550Q, 560Q, and 570

  • PowerHA 5.4.1 and 5.3 now support the 575 (IH) server, Model 9125-F21 (with feature 5797 and 5791 I/O drawers), on AIX 5.3 and 6.1

  • PowerHA 5.4.1 and 5.3 now support the 595 (HE) server, Model 9119-FHA (with feature 5797 and 5791 I/O drawers), on AIX 5.3 and 6.1

  • On the 575, PowerHA 5.4.1 and 5.3 now support the Coleman Ethernet Adapter, model 5717, on AIX 5.3 and 6.1.

  • POWER5 servers: pSeries models 9110-510, 9111-520, 9113-550, 9117-570, 9119-590, and 9119-595; iSeries models 9406-520, 9406-550, 9406-570, 9406-590, and 9406-595. APARs required:
    • PowerHA V5.2: IY58496 (all models)
    • AIX 5L 5.2: IY56554 and IY61014
    • AIX 5L 5.3: IY60930, IY61034, and IY62191; RSCT APAR IY61770

    Notes:

    • PowerHA support for Virtual SCSI and Virtual LAN (VLAN) on POWER5 (IBM eServer p5 and IBM eServer i5) models requires:
      • AIX 5L V5.3 Maintenance Level 5300-002 with APARs IY70082 and IY72974
      • VIO Server V1.1 with VIOS Fixpack 6.2 and iFIX IY71303
      • PowerHA V5.3, or HACMP V5.2 with APAR IY68370, or later, and APAR IY68387

      Additional requirements and specifications are detailed at:

      http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/FLASH10390

    • On all POWER5 servers except 575, 590, and 595, integrated serial ports are not enabled when the HMC ports are connected to an HMC. Either the HMC ports or the integrated serial ports can be used, but not both. Moreover, the integrated serial ports are supported only for modem and async terminal connections. Any other applications using serial ports, including PowerHA, require a separate serial port adapter to be installed in a PCI slot.

    • On IBM eServer i5 hardware, PowerHA will run only in LPARs that are running supported releases of AIX 5L. In addition, I/O (LAN and disk connections) must be directly attached to the LPARs in which PowerHA runs. I/O intended for use with PowerHA is limited to that which is listed in the device sections that follow.

  • Symmetric multiprocessor server systems: Models G30, J30, R30, R3U, G40, J40, R40, R4U, J50, S70, S7A, S80, and S85 (pSeries 680)

  • SP systems: Models 204, 205, 206, 207, 208, 209, 20A, 2A4, 2A5, 2A7, 2A8, 2A9, 2AA, 304, 305, 306, 307, 308, 309, 30A, 3A4, 3A5, 3A7, 3A8, 3A9, 3AA, 3B4, 3B5, 3B7, 3B8, 3B9, 3BA, 404, 405, 406, 407, 408, 409, 40A, 500, 50H, 550, and 55H, including the 604 High Nodes, 604E High Nodes, the Power2 Super Chip (P2SC) nodes, and the 375 MHz POWER3 SMP Nodes

  • BladeCenters: IBM BladeCenter JS20 blade servers 8842-42U and 8842-4TU, and JS21 blade servers 8844-31U, 51U, e3x, and e5x, in a BladeCenter or BladeCenter T chassis (8677-3XU, 8720-1RX, or 8730-1RX), running AIX 5L V5.3 with the TotalStorage DS4000 or ESS Model 800 disk subsystem family.

  • PowerHA 5.4.1 supports AIX 5.3 and 6.1 on the IBM BladeCenter JS22 (7998-61X).

  • IBM IntelliStation POWER 285 Workstation 9111-285

Any supported server can be joined with any other supported server in a PowerHA V5 configuration. Models with fewer than three slots can be used in a PowerHA V5 server configuration, but due to slot limitations, a single point of failure is unavoidable in shared-disk or shared-network resources.

PowerHA V5.3 and V5.4 support concurrent access configuration with all supported external storage systems.

Certain non-IBM RAID systems can operate in concurrent I/O access environments. IBM will not accept APARs if the non-IBM RAID offerings do not work properly with PowerHA V5.

The minimum configuration of each machine is highly dependent on the user's database package and other applications.

Actual configuration requirements are highly localized according to the required function and performance needs of individual sites. In configuring a cluster, particular attention must be paid to:

  • Fixed-disk capacity and mirroring (Logical Volume Manager and database)
  • Slot limitations and their effect on creating a single point of failure
  • Client access to the cluster
  • Other LAN devices (such as routers and bridges) and their effect on the cluster
  • Replication of I/O adapters and subsystems
  • Replication of power supplies
  • Other network software

Whenever a node takes over resources after a failure, consideration must be given to work partitioning. For example, if node A is expected to take over for failed node B while continuing to perform its original duties, node A must be configured with enough resources to perform the work of both.
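The takeover sizing rule above can be expressed as a simple arithmetic check. The node capacities and workload figures below are illustrative assumptions only, not values from this manual:

```python
# Hypothetical sizing check for mutual takeover: a surviving node must be
# able to carry its own workload plus that of the node it takes over.
# Capacity and load figures are illustrative, not from this document.

def can_take_over(capacity, own_load, peer_load):
    """Return True if a node with `capacity` units can run both workloads."""
    return own_load + peer_load <= capacity

# Node A: 16 capacity units, normally running an 8-unit workload;
# node B's workload is 6 units.
print(can_take_over(capacity=16, own_load=8, peer_load=6))  # enough headroom
print(can_take_over(capacity=12, own_load=8, peer_load=6))  # under-configured
```

The same check applies in each direction of a mutual takeover pair.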

PowerHA V5.3 and V5.4 device support

PowerHA supports client users on a LAN using TCP/IP. PowerHA monitors and performs IP address switching for the following TCP/IP-based communications adapters on cluster nodes:

  • Ethernet
  • EtherChannel
  • Token ring
  • FDDI
  • SP Switches
  • ATM
  • ATM LAN Emulation

At this time, the following adapters are supported in the PowerHA V5 environment. Refer to individual hardware announcements for the levels of AIX 5L supported.

Communications adapters

PCI/ISA

  • #1905 4 Gb Single Port Fibre Channel PCI-X 2.0 DDR Adapter
  • #1910 4 Gb Dual Port Fibre Channel PCI-X 2.0 DDR Adapter
  • #1912 IBM PCI-X DDR Dual Channel Ultra320 LVD SCSI Adapter
  • #1954 4-Port 10/100/1000 Base-TX PCI-X Adapter
  • #1977/1957 IBM 2 Gigabit Fibre Channel PCI-X Adapter
  • #1978 IBM Gigabit Ethernet-SX PCI-X Adapter
  • #1979/1959 IBM 10/100/1000 Base-TX Ethernet PCI-X Adapter
  • #1981 IBM 10 Gigabit Ethernet-SR PCI-X Adapter
  • #1982 IBM 10 Gigabit Ethernet-LR PCI-X Adapter
  • #1983/1990 IBM 2-port 10/100/1000 Base-TX Ethernet PCI-X
  • #1984 IBM Dual Port Gigabit Ethernet-SX PCI-X Adapter
  • #1995 IBM 10 Gigabit Ethernet-SR PCI-X 2.0 Adapter
  • #1996 IBM 10 Gigabit Ethernet-LR PCI-X 2.0 Adapter
  • #2920 IBM PCI Token-Ring Adapter
  • #2931 ISA 8-Port Asynchronous Adapter
  • #2932 ISA 8-Port Asynchronous Adapter
  • #2933 ISA 128-Port Asynchronous Controller
  • #2741 PCI FDDI-Fiber Single-Ring Upgrade
  • #2742 PCI FDDI-Fiber Dual-Ring Upgrade
  • #2743 PCI FDDI-Fiber Single-Ring Upgrade
  • #2944 128-Port Asynchronous Controller, PCI bus
  • #2943 8-Port Asynchronous EIA-232/RS-422, PCI bus Adapter
  • #2963 Turboways 155 PCI UPT ATM Adapter
  • #2968 PCI Ethernet 10/100 Adapter
  • #2969 PCI Gigabit Ethernet Adapter
  • #2975 10/100/1000 Base-T Ethernet PCI Adapter
  • #2979 PCI AutoLANStreamer Token-Ring Adapter
  • #2985 PCI Ethernet BNC/RJ-45 Adapter
  • #2986 PCI Ethernet 10/100 Adapter
  • #2987 PCI Ethernet AUI/RJ-45 Adapter
  • #2988 Turboways 155 PCI MMF ATM Adapter
  • #4025 Scalable POWERParallel Switch2 PCI Attachment Adapter (#8397) for SP-attached servers
  • #4951 IBM 4-Port 10/100 Base-Tx Ethernet PCI Adapter
  • #4953 IBM 64-bit/66 MHz PCI ATM 155 UTP Adapter
  • #4957 IBM 64-bit/66 MHz PCI ATM 155 MMF Adapter
  • #4959 Token-Ring PCI Adapter
  • #4961 IBM Universal 4-Port 10/100 Ethernet Adapter
  • #4962 IBM 10/100 Mbps Ethernet PCI Adapter II
  • #5700 IBM Gigabit Ethernet-SX PCI-X Adapter
  • #5701 IBM 10/100/1000 Base-TX Ethernet PCI-X Adapter
  • #5706 IBM 2-Port 10/100/1000 Base-TX Ethernet PCI-X Adapter
  • #5707 IBM 2-Port Gigabit Ethernet-SX PCI-X Adapter
  • #5718/5719 IBM 10 Gigabit Ethernet-SR/-LR PCI-X Adapters
  • #5721 IBM 10 Gigabit Ethernet-SR PCI-X 2.0 Adapter
  • #5722 IBM 10 Gigabit Ethernet-LR PCI-X 2.0 Adapter
  • #5723 2-Port Asynchronous EIA-232/RS-422, PCI bus Adapter
  • #5736 PCI-X DDR Dual Channel Ultra320 SCSI Adapter
  • #5740 4-Port 10/100/1000 Base-TX PCI-X Adapter
  • #5758 4 Gb Single Port Fibre Channel PCI-X 2.0 DDR Adapter
  • #5759 4 Gb Dual Port Fibre Channel PCI-X 2.0 DDR Adapter
  • #8396 RS/6000 SP System Attachment Adapter
  • #8398 RS/6000 SP Switch2 PCI-X Attachment Adapter

MCA

  • #1904 Fibre Channel Adapter
  • #2723 FDDI-Fiber Dual-Ring Upgrade
  • #2724 FDDI-Fiber Single-Ring Adapter
  • #2725 FDDI-STP Single-Ring Adapter
  • #2726 FDDI-STP Dual-Ring Upgrade
  • #2930 8-Port Async Adapter - EIA-232
  • #2964 10/100 Mbps Ethernet Adapter - UNI
  • #2972 AutoLANStreamer Token-Ring Adapter
  • #2980 Ethernet High-Performance LAN Adapter
  • #2989 Turboways 155 ATM Adapter
  • #2992 Ethernet/FDX 10 Mbps TP/AUI MC Adapter
  • #2993 Ethernet BNC MC Adapter
  • #2994 10/100 Mbps Ethernet Adapter - SMP
  • #4018 High-Performance Switch (HPS) Adapter-2
  • #4020 Scalable POWERParallel Switch Adapter
  • #4025 Scalable POWERParallel Switch2 Adapter
  • #4025 Scalable POWERParallel Switch2 MX2 Adapter (#4026) on SP nodes

GX HCA

  • #1809/10/11 IBM GX Dual-port 4x IB HCA
  • #7820 IBM GX Dual-port 12x IB HCA

Storage adapters

PCI

  • #1913/5737 PCI-X DDR Dual Channel Ultra320 SCSI RAID Adapter
  • #1975/5703 PCI-X Dual Channel Ultra320 SCSI RAID Adapter
  • #5711 PCI-X Dual Channel Ultra320 SCSI RAID Blind Swap Adapter
  • #5710/5712 PCI-X Dual Channel Ultra320 SCSI Adapter
  • #5713/1986 1 Gigabit-TX iSCSI TOE PCI-X adapter (copper connector)
  • #5714/1987 1 Gigabit-SX iSCSI TOE PCI-X adapter (optical connector)
  • #5716 2 Gigabit Fibre Channel PCI-X Adapter
  • #6203 PCI Dual Channel Ultra3 SCSI Adapter
  • #6204 PCI Universal Differential Ultra SCSI Adapter
  • #6205 PCI Dual Channel Ultra2 SCSI Adapter
  • #6206 PCI SCSI-2 Single-Ended Ultra SCSI Adapter
  • #6207 PCI SCSI-2 Differential Ultra SCSI Adapter
  • #6208 PCI SCSI-2 Single-Ended Fast/Wide Adapter
  • #6209 PCI SCSI-2 Differential Fast/Wide Adapter
  • #6215 PCI SSA Adapter
  • #6225 Advanced SerialRAID Adapter
  • #6230 Advanced SerialRAID Plus Adapter (including Fast Write Cache (#6235) with two-initiator only)
  • #6227 Gigabit Fibre Channel Adapter
  • #6228 1-and 2-Gigabit Fibre Channel Adapter for 64-bit PCI Bus
  • #6239 2 Gigabit FC PCI-X Adapter

MCA

  • #2412 Enhanced SCSI-2 Differential Fast/Wide Adapter/A
  • #2415 SCSI-2 Fast/Wide Adapter/A
  • #2416 SCSI-2 Differential Fast/Wide Adapter/A
  • #2420 SCSI-2 Differential High-Performance External I/O Controller
  • #6212 High Performance Subsystem Adapter/A (40/80 Mbps)
  • #6214 SSA 4-Port Adapter
  • #6216 Enhanced SSA 4-Port Adapter
  • #6219 MCA SSA Adapter

For compatibility with subsystems not listed in the following section, refer to the individual hardware announcements.

External storage subsystems

  • IBM 7131 SCSI Multi-Storage Tower Model 105 (supports up to four nodes; no CD-ROMs or tapes can be installed)
  • IBM 7131 SSA Multi-Storage Tower Model 405 (supports up to eight nodes; no CD-ROMs or tapes can be installed)
  • IBM 7133 SSA Disk Subsystem Models 020 and 600 (supports up to eight nodes)
  • IBM 7133 SSA Disk Subsystem Models D40 and T40 in up to 72.8 GB Mode (supports up to eight nodes)
  • IBM 7137 Disk Array Subsystem Models 413, 414, 415, 513, 514, and 515 (supports up to four nodes)
  • IBM 7204 External Disk Drive Models 317, 325, 339, 402, 404, and 418 (supports up to four nodes)
  • IBM 2105 Versatile Storage Server Models B09 and 100 (supports up to four nodes)
  • IBM Enterprise Storage Server Models E10, E20, F10, and F20 (supports up to eight nodes using SCSI and Fibre Channel interfaces via IBM FC/FICON features 3021, 3022, and 3023)
  • IBM 2105 TotalStorage Enterprise Storage Server Model 800
  • IBM TotalStorage DS4100 Storage Server 1724-100 (FAStT100) with DS4000 EXP100 1710-10U Storage Expansion Unit (requires DS4000 Disk Firmware YAR51HW0, and APARs for PowerHA, AIX 5L 5.2 or 5.3, and RSCT; refer to latest service information)
  • IBM TotalStorage DS4200 controller with EXP420 expansion unit and supported drives
  • IBM TotalStorage DS4300 Storage Server 1722-60U (FAStT 600)
  • IBM TotalStorage DS4300 with Turbo feature (1722-60U with #2000 or #2010) (FAStT 600)
  • IBM TotalStorage DS4400 Storage Server 1742-1RU (FAStT700)
  • IBM TotalStorage DS4500 Storage Server 1742-90U (FAStT900)
  • IBM TotalStorage DS4800 1815 with EXP810 Storage Expansion Unit (1812-81A) (requires all latest service)
  • IBM TotalStorage FAStT200 Storage Server 3542-2RU
  • IBM TotalStorage FAStT500 Storage Server 3552-1RU
  • IBM 2102-F10 Fibre Channel RAID Storage Server
  • IBM 2104 Expandable Storage Plus Models DL1, TL1, DU3, and TU3
  • IBM 2104 TotalStorage Expandable Storage Plus 320 Models DS4 and TS4
  • IBM 7031 TotalStorage Expandable Storage Ultra 320 Model D24 and T24
  • IBM 2108 Storage Area Network (SAN) Data Gateway Model G07
  • IBM 2031-016 McData ES-3016 Fibre Channel Product
  • IBM 2031-032 McData ES-3032 Fibre Channel Product
  • IBM 2031-216 McData ES-3216 Fibre Channel Product
  • IBM 2031-232 McData ES-3232 Fibre Channel Product
  • IBM 2032-001 McData ED-5000 Fibre Channel Director
  • IBM 2032-064 McData ED-6064 Fibre Channel Director
  • IBM 2031-224 McData Sphereon 4500 Fibre Channel Switch / SAN24M-1/IBM 2026-224
  • IBM 2034-212 McData Sphereon 4300 Fibre Channel Switch
  • INRANGE FC/9000 Fibre Channel Director Model 2042-001 and Model 2042-128
  • IBM 2109 Models S08, S16, F16, F32, and M12 SAN Fibre Channel Switch
  • IBM 3534 Model F08 SAN Fibre Channel Switch
  • IBM 2145-4F2 TotalStorage SAN Volume Controller with IBM TotalStorage SVC Storage Software V3.1 (SVC Storage Software V2.1 and V3.1 are supported with PowerHA V5.3)
  • 2863-A20 IBM System Storage N3700 dual controller model with Data ONTAP 7.1 software
  • IBM 2864 SystemStorage N5200 models A20 and G20
  • IBM 2865 SystemStorage N5500 models A20 and G20
  • IBM 1750 System Storage DS6000 Model EX2 Expansion Enclosure
  • IBM 1750 System Storage DS6800 Model 522
  • IBM 2107 System Storage DS8000 Turbo Models 931, 932, and 9B2

Tape drive support

  • IBM 3583 Ultrium Scalable Tape Library Model L18, L32, and L72
  • IBM 3584 Ultra Scalable Tape Library Model L32 and D32
  • IBM TotalStorage Enterprise Tape Drive 3590 Model H11
  • IBM Magstar 3590 Tape Drive Model E11 and B11
  • IBM 3581 Ultrium Tape Autoloader Model H17 and L17
  • IBM 3580 Ultrium Tape Drive Model H11 and L11

Router support

  • IBM RS/6000 SP Switch Router 9077-04S

    This router can be used in cluster configurations where the router provides communications to client systems. This router is not supported in the communications path between nodes in a PowerHA cluster.

  • IBM 7139-111 Vicom Systems SLIC Router

  • IBM 7140-160 SAN Controller 160

Rack-mounted storage subsystems

  • IBM 7027 High Capacity Storage Drawer Model HSC (supports up to two nodes; no CD-ROMs or tapes installed)

  • IBM 7027 High Capacity Storage Drawer Model HSD (supports up to four nodes; no CD-ROMs or tapes installed)

The following table shows the SCSI-2 Single-Ended, SCSI-2 Differential, and SSA cabling that PowerHA V5 supports.

 
                                                 Maximum number
                                                 of enclosures
Type                       Adapter     Enclosure  per bus
-------------------        -------     ---------  -------------

SCSI-2 Differential        6209        7131-105           2
 (16-bit)                              7137-413           2
                                       7137-414           2
                                       7137-415           2
                                       7137-513           2
                                       7137-514           2
                                       7137-515           2

SCSI Single-Ended          2415        7027-HSC           1
                           or 6208

SCSI-2 Differential        2420        7137-413           2
 (8-bit)                               7137-414           2
                                       7137-415           2
                                       7137-513           2
                                       7137-514           2
                                       7137-515           2

SCSI-2 Differential        2416        7027-HSD           1
 (16-bit)                  or 2412     7131-105           2
                                       7137-413           2
                                       7137-414           2
                                       7137-415           2
                                       7137-513           2
                                       7137-514           2
                                       7137-515           2
                                       7204-317          14
                                       7204-325          14
                                       7204-339          14

SSA                        6214        7133-020    96 disks
                           or 6216     7133-600    96 disks
                                       7133-D40    96 disks
                                       7133-T40    96 disks
                                       7131-405           4

SCSI-2 Differential disk cabling for PowerHA

The cabling configurations in the following tables assume the processors are at the end of the bus (just before each terminator) and the storage devices are connected to the bus between two of the processors.

The first table lists the available y-cables. Y-cables have three legs that are called base, long leg, and short leg. The first table shows what you can connect to each leg. The cables listed under the long leg and short leg columns can be found in the subsequent table under the cable name column, except for the terminator, which is supplied with the y-cable.

Y-cable
feature  Base to   Long leg to                  Length
number   adapter   device cable   Short leg       (m)  Notes
-------  -------   ------------   ------------   ----  ------
 
2114     6209      7131-105       Terminator,     .94  16-bit
                   7137-413         System-to-
                   7137-414         System Cable
                   7137-415
                   7204-339
 
2422     2420      7137-413       Terminator,     .765 8-bit
                   7137-414         System-to-
                   7137-415         System Cable
                   7137-513
                   7137-514
                   7137-515
                   7204-339
 
 
 
2426     2416      7027-HSD       Terminator,    .94   16-bit
         or 2412   7131-105         System-to-
                   7137-413         System Cable
                   7137-414
                   7137-415
                   7137-513
                   7137-514
                   7137-515
                   7204-317
                   7204-325
                   7204-339
 
 
 
                        Cable
    Cable/Device        feature  Length
From            To      number   (m)     Cable name
-------       -------   -------- ------  ---------------
Y-cable       Y-cable   2423     2.5     System-to-System Cable
                                         (8-bit)
Y-cable       Y-cable   2424     0.6     System-to-System Cable
                                         (16-bit)
Y-cable       Y-cable   2425     2.5     System-to-System Cable
                                         (16-bit)
 
Y-cable       7204-317  2845     0.6     7204 16-bit Cable
7204-317      7204-317
Y-cable       7204-325
7204-325      7204-325
Y-cable       7204-339
7204-339      7204-339
 
Y-cable       7204-317  2846     2.5     7204 16-bit Cable
7204-317      7204-317
Y-cable       7204-325
7204-325      7204-325
Y-cable       7204-339
7204-339      7204-339
 
Y-cable       7131-105  2882     1.0     7131-105 Cable
7131-105      7131-105
Y-cable       7131-105           2.5     7131-105 Cable
7131-105      7131-105
Y-cable       7131-105  2885     4.5     7131-105 Cable
7131-105      7131-105
Y-cable       7131-105  2870    12.0     7131-105 Cable
7131-105      7131-105
Y-cable       7131-105  2869    14.0     7131-105 Cable
7131-105      7131-105
Y-cable       7131-105  2868    18.0     7131-105 Cable
7131-105      7131-105
 
                         Cable
    Cable/Device         feature     Length
From            To       number       (m)       Cable Name
----------    --------   -------     ------     --------------
 
Y-cable       7131-105    9158          1.0     7131-105 Cable
Y-cable       7131-105    9132          2.5     7131-105 Cable
Y-cable       7131-105    9161          4.5     7131-105 Cable
Y-cable       7131-105    9146         12.0     7131-105 Cable
Y-cable       7131-105    9145         14.0     7131-105 Cable
Y-cable       7131-105    9144         18.0     7131-105 Cable
 
2415          7027-HSC    3133          3.0     7027-HSC Cable
2415          7027-HSC    3134          6.0     7027-HSC Cable
6208          7027-HSC    3133          3.0     7027-HSC Cable
6208          7027-HSC    3134          6.0     7027-HSC Cable
 
Y-cable       7027-HSD    3137         12.0     7027-HSD Cable
Y-cable       7027-HSD    3138         18.0     7027-HSD Cable
 
 
Y-cable       7137-413    2002          4.0     7137 Cable (8-bit to
              7137-414                          16-bit)
              7137-513
              7137-514
              7137-515
 
Y-cable       7137-413    2014          2.0     7137 Cable (16-bit)
(2426)        7137-414
              7137-415
              7137-513
              7137-514
              7137-515
 
7137-413      7137-413    3001          2.0     7137 System-to-System
7137-414      7137-414                           Cable
7137-415      7137-415
7137-513      7137-513
7137-514      7137-514
7137-515      7137-515
 

SSA disk cabling for PowerHA V5

PowerHA 5.3 and 5.4 support all of the announced SSA cables and Fiber Optics Channel Extenders. Refer to the appropriate system manuals for cabling information.

Other hardware

Other hardware that was supported in the previous release of PowerHA V5, and that is still covered under IBM warranty service, remains supported in this release of PowerHA, unless otherwise noted.

Software Requirements

The specific requirements for AIX 5L are:

  • AIX 5L 5.2 Technology Level 8 with RSCT version 2.3.9.2 (APAR IY84921), or later

  • AIX 5L 5.3 Technology Level 4 with RSCT version 2.4.5.1 (APAR IY84920), or later

    Note: PowerHA V5.3 and V5.4 are not supported on AIX 5L V5.1. HACMP V5.2 continues to support AIX 5L V5.1 with the 5100-08 Recommended Maintenance package or later modification levels.

    Note: Refer to the Hardware requirements section for APARs required for POWER5 support.

The RSCT file sets delivered with AIX 5L must be installed at the following minimum levels. They are:

  • AIX 5L V5.2: rsct.compat.basic.hacmp 2.3.9.0, rsct.compat.clients.hacmp 2.3.9.0, rsct.core.rmc 2.3.9.2, and rsct.core.sec 2.3.9.1

  • AIX 5L V5.3: rsct.compat.basic.hacmp 2.4.5.0, rsct.compat.clients.hacmp 2.4.5.0, rsct.core.rmc 2.4.5.2, and rsct.core.sec 2.4.5.1
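As an illustration only, the minimum-level checks for AIX 5L V5.3 could be automated with a small helper. The fileset names and minimum levels come from the list above, but the helper and the sample `installed` values are hypothetical; on a real node, installed levels would be read with the AIX `lslpp` command:

```python
# Illustrative check of installed RSCT fileset levels against the AIX 5L
# V5.3 minimums listed above. Installed levels would normally come from
# `lslpp -L <fileset>` on the node; here they are hard-coded as an example.

MINIMUM_LEVELS_53 = {
    "rsct.compat.basic.hacmp":   (2, 4, 5, 0),
    "rsct.compat.clients.hacmp": (2, 4, 5, 0),
    "rsct.core.rmc":             (2, 4, 5, 2),
    "rsct.core.sec":             (2, 4, 5, 1),
}

def parse_level(level):
    """Turn a dotted VRMF string such as '2.4.5.2' into a comparable tuple."""
    return tuple(int(part) for part in level.split("."))

def meets_minimums(installed, minimums=MINIMUM_LEVELS_53):
    """Return the filesets that are missing or below the required minimum."""
    problems = []
    for fileset, minimum in minimums.items():
        level = installed.get(fileset)
        if level is None or parse_level(level) < minimum:
            problems.append(fileset)
    return problems

# Example: rsct.core.rmc is one fix level too old.
installed = {
    "rsct.compat.basic.hacmp":   "2.4.5.0",
    "rsct.compat.clients.hacmp": "2.4.5.0",
    "rsct.core.rmc":             "2.4.5.1",
    "rsct.core.sec":             "2.4.5.1",
}
print(meets_minimums(installed))  # ['rsct.core.rmc']
```

Tuple comparison handles the four-part VRMF levels directly, so "2.4.5.1" correctly compares below "2.4.5.2".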

Each node within a high availability server complex requires the licensed program PowerHA V5 to be installed. Except during the upgrade process from earlier releases of PowerHA V4, it is recommended that all nodes in the PowerHA server complex be at the same AIX 5L operating system level, including PTFs and maintenance upgrades.

Some of the devices supported in PowerHA V5 may require a later release level of the AIX 5L operating system; refer to the specific hardware announcement for the AIX 5L release levels required by the hardware.

The HAView facility requires the installation of Tivoli NetView for AIX 5L (5697-NVW).

To use C-SPOC with VPATH disks, SDD 1.3.1.3, or later, is required.

To use PowerHA Online Planning Worksheets, AIX 5L Java Runtime Environment is required.

PowerHA V5 supports use of AIX 5L V5.2 MPIO for multipath access to disk subsystems.

PowerHA V5 supports use of SDDPCM V2.1.0.8, or later, configured to access the shared disks with the "no_reserve" reserve policy. The shared disks must be defined as being in an Enhanced Concurrent Mode (ECM) volume group. The persistent reserve policy is not supported in a PowerHA environment.
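As a sketch of how the reserve-policy rule might be verified: on AIX the attribute is shown by `lsattr -El hdiskN -a reserve_policy`, but the sample output lines and hdisk names below are assumptions for illustration only:

```python
# Illustrative validation that shared disks use the "no_reserve" reserve
# policy, as PowerHA requires with SDDPCM. The output format mimicked here
# ("reserve_policy <value> Reserve Policy True") is an assumption.

def reserve_policy(lsattr_line):
    """Extract the policy value from a captured `lsattr` output line."""
    return lsattr_line.split()[1]

def check_disks(disk_output):
    """Return the disks whose reserve policy is not 'no_reserve'."""
    return [disk for disk, line in sorted(disk_output.items())
            if reserve_policy(line) != "no_reserve"]

# Hypothetical captured output for two shared disks.
disk_output = {
    "hdisk2": "reserve_policy no_reserve   Reserve Policy True",
    "hdisk3": "reserve_policy single_path  Reserve Policy True",
}
print(check_disks(disk_output))  # ['hdisk3']
```

A disk flagged this way would need its policy changed (on AIX, typically with `chdev`) before being placed in an ECM volume group.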

PowerHA/XD requires:

  • PowerHA V5.4 base (cluster.es.server.rte 5.n.0.0) at the same release level (n) as PowerHA/XD.

  • For Metro Mirroring, at a minimum:
    • AIX 5L Java Runtime Environment
    • IBM Storage Subsystem Device Driver, Microcode, and Command Line Interface

    Refer to the PowerHA/XD Release Notes shipped with the product, for details regarding requirements for specific configurations and software levels required for PowerHA/XD Metro Mirroring.

  • For XD IP-based (HAGEO) mirroring: No additional prerequisites

  • For XD GLVM:
    • AIX 5L V5.3 with Technology Level 5

PowerHA Smart Assist requires:

  • PowerHA V5 (5765-F62) base feature at the same release level as PowerHA Smart Assist

  • For Smart Assist for DB2: PowerHA V5.4 and either AIX 5L V5.2 with Recommended Maintenance Package 5200-01, or later, or AIX 5L V5.3

  • For Smart Assist for Oracle: RSCT 2.3.6 filesets

  • For Smart Assist for WebSphere: IBM HTTP Server V6.0

PowerHA Smart Assist supports the following applications or later modification levels of these applications:

  • For new PowerHA V5.4 configurations: WebSphere Application Server V6.0 (in a Network Deployment environment only).

    Note: If migrating from PowerHA V5.3 to V5.4, WebSphere Application Server V5.0 will continue to be supported.

  • PowerHA V5.3:
    • Tivoli Directory Server V5.2, or later
    • Deployment Manager V6.0, or later
    • DB2 Server V8.1 or V8.2, or later
    • Oracle Application Server 10g (AS10g) Cold Failover Cluster V9.0.4, or later

  • PowerHA V5.4:
    • Tivoli Directory Server V5.2, or later
    • Deployment Manager V6.0, or later
    • DB2 Server V8.1 or V8.2, or later
    • Oracle Application Server 10g (AS10g) Cold Failover Cluster V9.0.4, or later
 
Planning Information

Customer Responsibilities

Customers who purchase two or more PowerHA licenses may replicate the contents of the following file sets throughout their enterprise:

  • cluster.haview
  • cluster.hativoli
  • cluster.es.client
  • cluster.es.plugins
  • cluster.adt.es
  • cluster.msg.en_US.es
  • cluster.msg.En_US.es
  • cluster.msg.ja_JP.es
  • cluster.msg.Ja_JP.es
  • cluster.man.en_US.es
  • cluster.doc.en_US.es
  • cluster.es.worksheets
  • cluster.es.client.wsm

The customer is responsible for evaluation, selection, and implementation of security features, administrative procedures, and appropriate controls in application systems and communication facilities.

Compatibility

PowerHA V5.4 supports dynamic upgrade from HACMP 5.1, 5.2, and 5.3, and static upgrades from all prior versions:

  • A dynamic upgrade from PowerHA V5.1, V5.2, or V5.3 involves installing PowerHA V5.4 on all nodes in the cluster; however, the Version Compatibility function allows you to upgrade the cluster one node at a time, without taking the entire cluster offline. Configuration data is retained.

  • A static upgrade from PowerHA/6000 V1.2, V2.1, V3.1, V4.1, V4.2, V4.3, V4.4.0, V4.4.1, or V4.5, to PowerHA V5.4 involves reinstalling HACMP on all nodes in the cluster at the same time. This means that at some point, the cluster must be brought down; however, with proper planning the downtime can be minimized. Configuration is not retained for PowerHA releases prior to V4.5.

Note: Although different releases of PowerHA V5 can coexist in a cluster temporarily, the Version Compatibility function is intended to ease migration from prior releases of PowerHA V5 and is not intended to provide long-term compatibility between versions of the product in a cluster.

Limitations

PowerHA provides high availability for the applications and resources executing on the nodes in the PowerHA cluster. PowerHA does not provide availability beyond the realm of the defined cluster configuration. A high availability implementation requires that there be no single points of failure; for example, placing the primary and backup PowerHA nodes in LPARs within a single server frame leaves the frame itself as a single point of failure.

Specific limitations of PowerHA

  • PowerHA supports clusters of up to 64 resource groups and 256 interfaces across up to 32 AIX 5L/PowerHA images (System p or System i servers, SP nodes, RS/6000 systems, or LPARs).

  • The following networks are not supported:
    • Serial Optical Channel Converter
    • SLIP
    • Fibre Channel Switch
    • 802_ether
    • Virtual IP Address facility of AIX 5L
    • IPv6

  • PowerHA Support for CUoD, CBU, and DLPAR:

    PowerHA can be run in LPARs to which processors or memory are added through CUoD or DLPAR (refer to Hardware Announcement 102-260, dated October 8, 2002) or CBU (refer to Hardware Announcement 103-286, dated October 14, 2003). Care must be taken to ensure that sufficient capacity and appropriate procedures are in place to support these features.

    Adjusting the hardware configuration (adding or removing processors or memory) of a PowerHA node will, under many circumstances, put a significant additional temporary load on that system. This can cause PowerHA to interpret that system as having failed. PowerHA does not currently have a supported mechanism that will automatically deal with this in all cases. Customers who want to take advantage of these features should run tests and perform adequate benchmarking of their environments to determine the timing implications under varying workload conditions. IBM is not responsible for failures due to incorrect settings of these values, as these tuning parameters are unique to every customer environment.

  • When installing PowerHA on a system with Trusted Computing Base (TCB) security in place, special considerations are required. PowerHA will modify some system files that are monitored by TCB, which causes TCB to report errors. The error messages may not identify PowerHA as the origin of the changes to the monitored files. While this will not affect the operation of PowerHA, TCB, or the AIX 5L system, customers should verify that the messages are indeed caused only by the installation of PowerHA. If that verification is successful, then no further action is required (the tcbck command should not be used to undo the PowerHA changes). Otherwise, normal security procedures should be followed.

  • The Fast Failure Detection function is supported on all supported disk types except SSA.

  • PowerHA updates SNMP during installation.

Specific limitations of PowerHA/XD

  • Only one Metro Mirror-supported Storage Server is supported at each site.

  • PowerHA/XD support does not include Global Mirror functions of SVC Copy Services.

  • PPRC eRCMF supports up to eight Enterprise Storage Servers, with a maximum of four at each site.

  • Concurrent disk access within PowerHA/XD GLVM is supported only within sites, not between sites.

  • A single PowerHA/XD cluster supports only two sites. A single node can be part of only one PowerHA/XD cluster site.

  • A single PowerHA/XD cluster supports up to eight nodes.

Specific limitations of PowerHA Smart Assist

  • Smart Assist for DB2 cannot be used to configure a cluster in a partitioned (DB2 UDB DPF) environment.

  • When protecting WebSphere Application Servers, only environments using Network Deployment (that is, using Deployment Manager) will be configured into PowerHA by the WebSphere Smart Assist.

  • The default configuration created for the WebSphere components assumes an unloaded standby system is available, with enough resources to assume the workload of the active system.

  • A single standby system may be configured to protect multiple workloads, but is assumed to only effectively execute one workload at a time. The default configuration can be subsequently modified to meet installation-specific requirements.

  • For WebSphere configurations using an IBM HTTP Server (IHS) with application servers, Smart Assist will automatically configure the IHS to failover with the application server. This will require the administrator to place the IHS data on a shareable volume group accessible by both nodes.

  • Because of the new common Smart Assist infrastructure, there is no migration path from the PowerHA V5.3 Oracle Smart Assist to the V5.4 Oracle Smart Assist. PowerHA resource groups and resources constructed with V5.3 Oracle Smart Assist are migrated, but you cannot use V5.4 to manage V5.3 Oracle Smart Assist instances.

Environmental conditions that affect the use of PowerHA

  • The time PowerHA takes to recover from a failure depends, in part, on the amount of time it takes for AIX 5L to detect the failure. With the default network detection settings, some token-ring failures may require more than 60 seconds to be detected.

  • PowerHA does not support attached terminals using Local Area Transport-B protocols.

  • Cluster nodes must be within the cable-length limitations of the shared disks, so the physical distance between nodes is limited by the total cable length of the shared disk cables.

  • Transparency of a failure and the subsequent recovery to external users or clients is dependent on the PowerHA configuration, the client system, and application protocol design.

  • The AIX 5L Journaled File System (JFS and JFS2) does not support concurrent access from multiple nodes; therefore, storage accessed in a concurrent configuration must be in raw logical volumes or through GPFS.

  • For systems running NetView or other programs that catch SNMP traps, refer to PowerHA documentation for details on the use of the clinfo daemon with such programs.

  • National Language Support is fully enabled in PowerHA for AIX 5L; message translation is provided for Japanese, but other system components such as RSCT produce messages in English only.

Performance Considerations

PowerHA 5.4.0 cluster performance can be measured and reported in many ways. In a mutual takeover/partitioned workload cluster environment, the user's applications and data are spread across two to 32 nodes in a cluster. Data management and application management can be placed under the control of a single node for efficiency, or on each node for performance. In this partitioned environment, minimal interaction between nodes yields high efficiency on each node. When a failover occurs and a backup node takes over for a failed node, performance is degraded for the period of time during which the failed node is down.

In data sharing, LAN and file system overhead and data contention can reduce processor efficiency. Where data must be shared, the concurrent access configuration can be utilized, but less efficiently than a single system because distributed locks must be maintained between cluster nodes and processors.

Conversion

PowerHA 5.4 includes conversion utilities to help you convert your configuration from earlier releases of PowerHA without installing each intervening version.

All of these conversions can be done while your cluster is not operating; for information regarding node conversion while maintaining cluster operation, refer to the Compatibility section.

The following conversions are provided for converting existing configurations of PowerHA V5 to V5.4:

  • PowerHA for AIX 5L, V5.1
  • PowerHA for AIX 5L, V5.2
  • PowerHA for AIX 5L, V5.3

PowerHA 5.4 includes conversion utilities to help you migrate your configuration between products in the PowerHA family.
 

Publications

License Information will display automatically when PowerHA V5 is installed.

The following publications are supplied on CD-ROM with the basic machine-readable material.

  • PowerHA for AIX 5L: Concepts and Facilities Guide (SC23-4864)
  • PowerHA for AIX 5L: Planning Guide (SC23-4861)
  • PowerHA for AIX 5L: Installation Guide (SC23-5209)
  • PowerHA for AIX 5L: Administration Guide (SC23-4862)
  • PowerHA for AIX 5L: Troubleshooting Guide (SC23-5177)
  • PowerHA for AIX 5L: Programming Client Applications (SC23-4865)
  • PowerHA for AIX 5L: Master Glossary (SC23-4867)

For PowerHA/XD GLVM customers:

  • PowerHA/XD GLVM Planning and Administration Guide (SA23-1338)

For PowerHA/XD / Metro Mirror customers:

  • PowerHA/XD: Metro Mirror Planning and Administration Guide (SC23-4863)

For PowerHA/XD / IP customers:

  • PowerHA/XD for AIX 5L for HAGEO Technology Concepts and Facilities (SA22-7955)
  • PowerHA/XD for AIX 5L for HAGEO Technology Planning and Installation Guide (SC23-4862)

For PowerHA Smart Assist customers:

  • PowerHA for AIX 5L: Smart Assist for WebSphere User's Guide (SC23-4877)
  • PowerHA for AIX 5L: Smart Assist for Oracle (SC23-5178)
  • PowerHA for AIX 5L: Smart Assist for DB2 (SC23-5179)
  • PowerHA for AIX 5L: Smart Assist Developer's Guide (SC23-5210)

 
Security, Auditability, and Control

This program uses the security and auditability features of AIX 5L V5.2 and V5.3 for servers.

Trademarks

(R), (TM), * Trademark or registered trademark of International Business Machines Corporation.

** Company, product, or service name may be a trademark or service mark of others.
 © IBM Corporation 2010.