IBM GDPS V3.6: Enterprise-wide infrastructure availability and disaster recovery

IBM United States Marketing Announcement 309-002
February 24, 2009



Overview

The GDPS® solution gives you added peace of mind knowing your system infrastructure availability and disaster recovery (DR) solution is always ready. Many companies practice disaster recovery testing, but a real disaster is not a test. Procedures may not be current or key people may be unavailable. The staff may have other priorities such as their houses and families, they may not be able to get to or communicate with the DR site, or they may not survive the disaster. Recovery from an actual disaster may take significantly longer than planned, which may affect bottom-line revenue for the business. GDPS helps remove these concerns by managing and monitoring properly configured disk remote copy, combined with features designed to automate the recovery actions. In addition, GDPS provides automation capabilities for planned data center activities. This helps to remove people as a single point of failure.

GDPS V3.6 enhances IBM's current industry-leading Continuous Availability and Disaster Recovery (CA/DR) automation by extending the support for managing heterogeneous platforms, and continuing to exploit the advanced data replication technologies of IBM® DS8000™ Storage to provide faster backup/restore solutions and improved ease of use. These benefits can be achieved through new functions such as:

  • Improved coordinated disaster recovery across heterogeneous platforms by supporting:
    • Distributed Cluster Management (DCM) support for GDPS/Global Mirror with Veritas Cluster Server
    • GDPS/PPRC Multiplatform Resiliency for System z® (xDR) support for LSS sharing between z/VM® LPARs
  • Increased availability with:
    • Reduced-impact Metro Mirror initial copy and resynchronization for GDPS/PPRC and GDPS/PPRC HyperSwap™ Manager configurations
    • Enhanced timer support for GDPS/PPRC
    • GDPS/PPRC Multiplatform Resiliency for System z (xDR) support for two controlling systems (K-sys)
    • Remote Pair FlashCopy® support with GDPS/PPRC and GDPS/PPRC HyperSwap Manager configurations
  • Simplified System Management with:
    • New GDPS Health Checks
    • Introduction of Query Services

IBM is announcing the following Statements of Direction:

  • Improved scalability with:
    • GDPS/PPRC Alternate Subchannel Sets
  • Increased disaster recovery protection with:
    • GDPS/MGM Incremental Resynchronization enhancement
  • Improved automation with:
    • GDPS/PPRC timer support extensions
  • Increased availability with:
    • New HyperSwap trigger for z/VM

GDPS V3.6 is planned for general availability on March 31, 2009.

More detailed information on GDPS service offerings is available at:

Planned availability date

March 31, 2009:

  • RCMF/PPRC V3.6
  • GDPS/PPRC V3.6
  • GDPS/PPRC HyperSwap Manager V3.6
  • RCMF/XRC V3.6
  • GDPS/XRC V3.6
  • GDPS/Global Mirror V3.6
  • GDPS Metro/Global Mirror V3.6
  • GDPS Metro/z/OS Global Mirror V3.6

Description

GDPS V3.6 has been enhanced to offer:

  • Improved coordinated disaster recovery across heterogeneous platforms by supporting:
    • Distributed Cluster Management (DCM) support for GDPS/GM
    • GDPS/PPRC Multiplatform Resiliency for System z (xDR) support for LSS sharing between z/VM LPARs
  • Increased availability with:
    • Reduced impact Metro Mirror initial copy and resynchronization
    • GDPS/PPRC timer support
    • GDPS/PPRC Multiplatform Resiliency for System z (xDR) support for two controlling systems (K-sys)
    • Remote Pair FlashCopy support
  • Simplified system management with:
    • New GDPS Health Checks
    • Query Services

The following are items not previously announced that were delivered as GDPS V3.5 SPEs. They are included with the GDPS V3.6 base code.

  • GDPS/PPRC Multiplatform Resiliency for System z (xDR) extensions for two K-sys support
  • GDPS/PPRC Multiplatform Resiliency for System z (xDR) extensions for LSS sharing between up to four z/VM LPARs

The following are items previously announced in "IBM GDPS V3.5: Enterprise-wide infrastructure availability," Marketing Announcement 308-001, dated February 26, 2008, that were delivered as GDPS V3.5 SPEs. They are included with the GDPS V3.6 base code.

  • z/OS® Metro/Global Mirror Incremental Resynchronization

    This helps improve the recovery capability of GDPS/MzGM by allowing the z/OS Global Mirror session to be quickly reestablished after a HyperSwap between the primary and secondary site.

  • CBU and On/Off CoD Enhancements

    System Management enhancements are available for the Capacity Backup (CBU) function; a GDPS customer no longer needs to use the Remote Service Facility (RSF) to obtain the authentication keyword.

    New keywords have been added to support activation and deactivation of the On/Off CoD function.

  • DCM support for Tivoli® System Automation Application Manager

    This new capability is designed to provide coordinated recovery and failover between an IBM System z running GDPS and non-System z platforms managed with System Automation Application Manager.

  • Support for Tivoli Business Continuity Process Manager

    This is designed to improve system management by helping to manage the workflow of human interactions.

Heterogeneous disaster recovery coordination

Distributed Cluster Management (DCM) for GDPS/GM

GDPS/GM V3.6 has been enhanced to provide DCM support for VCS clusters. Distributed Cluster Management (DCM) is a GDPS capability introduced in GDPS V3.5 which allows the management and coordination of planned and unplanned outages across distributed servers which may be clustered using clustering solutions, and the System z workloads that GDPS is responsible for. DCM support was initially introduced in GDPS/PPRC V3.5 and GDPS/XRC V3.5. The support in GDPS V3.5 was for distributed clusters managed by Veritas Cluster Server (VCS) and IBM Tivoli System Automation Application Manager (SA AppMan).

As with GDPS/PPRC and GDPS/XRC, the GDPS/GM support provides the ability to perform a planned or unplanned site switch for any or all of the VCS clusters, depending upon the GDPS policy. When a planned or unplanned site switch is performed, all VCS service groups on each VCS cluster will be moved. Some of the capabilities of DCM support for GDPS/GM include:

  • Monitoring

    GDPS monitors DCM-related resources and generates SDF alerts for resources that are not in a "normal" state.

  • Manual operations

    The GDPS panels include an option to query and view the status of DCM resources, and to perform planned operations on individual DCM resources.

  • Automation

    GDPS issues the GEO112E/GEO113A takeover prompt and suggests possible scripts to run when it detects various failures associated with DCM resources.

  • Scripting

    The scripting capability in GDPS provides workflow integration for actions taken on distributed servers and System z servers in the event of a planned or unplanned event.

With the DCM support, two VCS clusters, one per site, can be coupled with the Global Cluster Option (GCO) with the GDPS/GM environment spread across the same two sites. GDPS/GM is the controller of the GCO VCS clusters, managing the System z data while each GCO VCS cluster manages its remote mirror or replication.

GDPS/GM DCM support is expected to be available on April 24, 2009.

More information on the Distributed Cluster Management support can be found in "GDPS V3.5: Enterprise-wide infrastructure availability," Marketing Announcement 308-001, dated February 26, 2008.

LSS sharing for z/VM

GDPS/PPRC Multiplatform Resiliency for System z now supports disk sharing between z/VM images by allowing volumes within an LSS to be shared among multiple z/VM LPARs. If multiple z/VM LPARs are managed by GDPS, one can now, for example, assign 40 devices in an LSS to one z/VM, 100 to a second, and 116 to a third.

A Logical Subsystem (LSS) is a structure internal to a disk subsystem. Each LSS supports up to 256 logical devices, with each logical device being mapped to a logical disk volume. Once the disks of more than one z/VM reside in the same LSS, the z/VM LPARs can additionally be configured to share individual devices with each other, using the z/VM Cross System Extension (CSE) function. With GDPS V3.6, sites running z/VM and CSE can now use GDPS/PPRC Multiplatform Resiliency to help manage disaster recovery for the z/VM environment.
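
As an illustration of the device accounting above, here is a minimal sketch of dividing one LSS's 256 logical devices among three z/VM LPARs. The function and LPAR names are invented for illustration; GDPS itself has no such interface.

```python
# Illustrative only: partition the 256 logical devices of a single LSS
# among several z/VM LPARs, as in the 40/100/116 example in the text.

DEVICES_PER_LSS = 256  # each LSS maps up to 256 logical devices

def partition_lss(allocation: dict) -> dict:
    """Assign contiguous device ranges within one LSS to each LPAR."""
    if sum(allocation.values()) > DEVICES_PER_LSS:
        raise ValueError("allocation exceeds LSS capacity")
    ranges, next_dev = {}, 0
    for lpar, count in allocation.items():
        ranges[lpar] = range(next_dev, next_dev + count)
        next_dev += count
    return ranges

# The example from the text: 40 + 100 + 116 = 256 devices, one full LSS.
layout = partition_lss({"VM1": 40, "VM2": 100, "VM3": 116})
```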

Support for up to four z/VM LPARs sharing an LSS is available with GDPS V3.5 with SPE PK71525, for CKD and ECKD™ formatted disk. Support for more than four z/VM LPARs requires GDPS V3.6.

GDPS/PPRC Multiplatform Resiliency for System z requires IBM Tivoli System Automation for Multiplatforms (TSA MP).

Sharing an LSS containing Metro Mirrored disk between z/VM and z/OS is not supported.

CSE still has a limit of four z/VM LPARs sharing a single device within a logical subsystem. For more information on CSE, see CP Planning and Administration (SC24-6083) at


Reduced impact Metro Mirror initial copy and resynchronization

GDPS V3.6 allows reduced impact initial copies and resynchronization of Metro Mirror volumes. This reduces the exposure window where the environment is without Freeze or HyperSwap protection to protect the consistency group and provide near-continuous availability.

In a GDPS/PPRC or GDPS/PPRC HyperSwap Manager environment, some customers defer the initial copy and/or resynchronization of the secondary disk to a period of low workload activity to mitigate any possible performance impact on production workloads. They may also pace the number of volumes that are concurrently initial copied or resynchronized.

New with GDPS V3.6, the GDPS/PPRC DASD START SECONDARY script statement and the GDPS/PPRC HM HYPERSW RESTORE command are extended to initiate the initial copy and resynchronization using asynchronous Global Copy (previously called PPRC-XD). GDPS then monitors the progress of the copy operation. When the volumes are near full duplex state, GDPS converts the replication from asynchronous Global Copy to synchronous Metro Mirror. Performing the initial copy or resynchronization with Global Copy is expected to reduce the performance impact on production workloads, allowing customers to resynchronize mirroring during periods of high production workload.
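
The copy-mode escalation just described can be sketched as follows. This is an illustrative model only: the function name, the simple track counter, and the "near full duplex" threshold are assumptions, not GDPS or DS8000 interfaces.

```python
# Illustrative model: drain out-of-sync tracks with asynchronous Global
# Copy, then convert the pair to synchronous Metro Mirror near full duplex.

NEAR_DUPLEX_THRESHOLD = 0.01  # assumed: convert when <1% of tracks remain

def resync_pair(out_of_sync_tracks: int, total_tracks: int,
                tracks_copied_per_cycle: int) -> list:
    """Return the sequence of replication modes the pair passes through."""
    modes = []
    while out_of_sync_tracks / total_tracks > NEAR_DUPLEX_THRESHOLD:
        modes.append("GLOBAL-COPY")   # asynchronous: low production impact
        out_of_sync_tracks = max(0, out_of_sync_tracks - tracks_copied_per_cycle)
    modes.append("METRO-MIRROR")      # synchronous: Freeze/HyperSwap protection
    return modes
```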

This solution requires a disk subsystem that supports Global Copy. It is applicable for GDPS/PPRC and GDPS/PPRC HM, and addresses customer requirement PTDB1004-738.

GDPS/PPRC timer support

Enhancements in GDPS V3.6 and z/OS V1.11 help improve GDPS recovery times for events that impact the primary time source for the sysplex, whether the time source is Server Time Protocol (STP) or External Time Reference (ETR) based. These enhancements allow the GDPS controlling system (K-sys) to continue processing even when the server it is running on loses its time source and becomes unsynchronized. The K-sys will be able to complete any Freeze or HyperSwap processing it may have started instead of being held in a disabled WTOR state. Normally, a loss of synchronization with the sysplex timing source generates a disabled console WTOR that suspends all processing on the LPAR until a response is made to the WTOR.

In addition, since the K-sys remains operational, it can be used to help with problem determination and situation analysis during the outage, further reducing the recovery time needed to restart applications.

The K-sys is required to perform GDPS automation in the event of a failure. Actions may include:

  • Performing the Freeze processing to guarantee secondary data consistency
  • Coordinating HyperSwap processing
  • Executing a takeover script
  • Aiding with situation analysis

Since the K-system only needs to run with a degree of time synchronization that allows it to correctly participate in heartbeat processing with respect to the other systems in the sysplex, this system should be able to run unsynchronized for a period of time using the local TOD clock of the server, instead of generating a WTOR.
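
The decision logic above can be sketched roughly as follows. The grace-period value and function name are assumptions for illustration; they are not z/OS or GDPS behavior specifications.

```python
# Illustrative model of the timer-loss behavior described in the text: an
# ordinary LPAR is suspended behind a disabled-console WTOR, while the
# K-sys runs on the server's local TOD clock for an assumed grace period
# so it can finish any Freeze or HyperSwap processing it has started.

def on_timer_loss(is_ksys: bool, unsynchronized_seconds: int,
                  grace_period: int = 300) -> str:
    if not is_ksys:
        return "WTOR"                  # processing suspended until a reply
    if unsynchronized_seconds <= grace_period:
        return "RUN-ON-LOCAL-TOD"      # keep automating Freeze/HyperSwap
    return "WTOR"                      # grace period exhausted
```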

IBM plans to roll support back to z/OS V1.9 PTFs for APAR OA26085.

Two K-systems support

GDPS/PPRC Multiplatform Resiliency for System z (also known as xDR) allows GDPS to manage Linux® guests on z/VM and native Linux on System z LPARs with the same quality of service as for z/OS LPARs. This includes support for HyperSwap for Linux data, and the ability of GDPS to manage Linux and z/VM LPARs, and IPL the Linux and z/VM operating systems. This provides a coordinated CA/DR solution for both z/OS and Linux on System z for multi-tiered architectures. With GDPS V3.6, two controlling system (K-sys) LPARs can now be defined, both with awareness of the non-z/OS environment.

The two K-sys support removes various exposures associated with running with a single K-sys, providing GDPS/PPRC Multiplatform Resiliency with the same level of protection that z/OS environments already enjoy with two K-sys. Prior to GDPS V3.6, GDPS/PPRC Multiplatform Resiliency could communicate with only one GDPS K-system, which introduced availability exposures. For example, after disk is HyperSwapped to Site2, the primary disks and the K-sys are in the same site, so the environment is not protected against a site failure. Also, if the K-sys disk and the Metro Mirrored disk are on the same disk subsystem and there is a disk failure, the K-sys can fail, leaving no HyperSwap protection.

Two K-sys support for GDPS/PPRC Multiplatform Resiliency for System z requires IBM Tivoli System Automation for Multiplatforms (TSA MP) Version 3.1 Fixpack 1.

This support is available with GDPS V3.5 with SPE PK70177 and PK70178.

Remote Pair FlashCopy

Remote Pair FlashCopy provides the capability to allow a FlashCopy relationship where the FlashCopy target device is a Metro Mirror primary device. This can significantly reduce the recoverability exposure that exists while a FlashCopy background copy and a Metro Mirror resynchronization are in progress.

GDPS allows Metro Mirror primary volumes as targets for FlashCopy. This can be used to ensure that a point-in-time backup taken via FlashCopy is mirrored to the recovery site.

DS8000 storage servers have provided a function to allow a FlashCopy target to be a Metro Mirror primary device. To ensure the integrity of the mirror, this required the Metro Mirror pair to go into a duplex pending state while the tracks associated with the FlashCopy relationship were copied to the Metro Mirror secondary device. While this provided a valuable function to many customers, for others, any time that the Metro Mirror relationships in their environment are in a state other than full duplex is viewed as a loss of their mirror. If many FlashCopy operations are performed throughout the day, as is often the case with dataset-level copy operations where a FlashCopy license is present, the Metro Mirror remote site may rarely be a true mirror of the local or production site.

The solution to this problem is to preserve the mirror by also propagating the FlashCopy command, when issued at the local site, to the remote site, provided the proper configuration exists. Keeping the disk in full duplex state has many advantages, such as:

  • Keeping the environment HyperSwap ready
  • Keeping the environment Freeze ready
  • Allowing the use of many tools and products to take advantage of FlashCopy, including DB2®, DFSMS™, and others
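
The contrast above can be sketched as a toy model. The pair states mirror common Metro Mirror terminology, but the function and return structure are invented for illustration, not DS8000 interfaces.

```python
# Illustrative contrast: classic FlashCopy onto a Metro Mirror primary
# drops the pair to duplex pending while copied tracks drain across the
# Metro Mirror link; Remote Pair FlashCopy propagates the same FlashCopy
# command to the remote site, so the pair stays full duplex.

def flashcopy_to_mm_primary(remote_pair_flashcopy: bool) -> dict:
    if remote_pair_flashcopy:
        # FlashCopy issued at both sites; the mirror never degrades
        return {"pair_state": "FULL-DUPLEX", "hyperswap_ready": True}
    # tracks must be re-driven to the secondary before full duplex returns
    return {"pair_state": "DUPLEX-PENDING", "hyperswap_ready": False}
```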

This function is supported by GDPS/PPRC and GDPS/PPRC HyperSwap Manager and requires DS8000 at R4.2 microcode level. The DS8000 support is expected to be available on April 24, 2009. For more information about the new DS8000 features, refer to "IBM System Storage™ DS8000 series (Machine type 2107) delivers new scalability, and business continuity capabilities," Hardware Announcement 109-119, dated February 10, 2009.

This function addresses multiple customer requirements.

Simplified System Management

New Health Checks

The objective of the Health Checker for z/OS is to inform customers when their configurations differ from IBM-provided best practice values. Three GDPS Health Checks were delivered with GDPS V3.4; these are described in "IBM GDPS V3.4: Enterprise-wide disaster recovery," Marketing Announcement 307-045, dated March 13, 2007.

Three new GDPS Health Checks are being delivered with GDPS V3.6: GDPS_Check_CONSOLE, GDPS_Check_K_MAXSYS, and GDPS_Check_SYSPLEX_CDS.
GDPS_Check_CONSOLE checks a number of settings related to the console definitions to ensure they meet the GDPS recommendations, any existing IBM-provided best practice values, and any customer-provided overrides. This check examines a number of console-related attributes including RMAX, LOGLIM, and MLIM. It also checks if the Message Flood Automation function is enabled.

GDPS_Check_K_MAXSYS verifies that MAXSYS in the sysplex CDS is specified with a value of 9 or greater.

When a WTOR is issued, if the maximum number of LPARs in the Parallel Sysplex® is eight or less (based upon the MAXSYS parameter in the sysplex CDS format utility), the CONSOLE address space obtains a reply-ID from the sysplex CDS. If the number of LPARs is nine or more, CONSOLE instead allocates a pool of reply-IDs to each system to minimize accesses to the sysplex CDS, satisfies reply-ID requests from this pool, and accesses the sysplex CDS only when the pool is exhausted.
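
The reply-ID behavior above can be modeled with a small sketch. The class, the pool size, and the access counter are invented for illustration; they are not CONSOLE internals.

```python
# Illustrative model: with MAXSYS of 8 or less, every WTOR costs one
# sysplex CDS access; with 9 or more, reply-IDs come from a per-system
# pool and the CDS is touched only to refill an exhausted pool.

class ReplyIdAllocator:
    def __init__(self, maxsys: int, pool_size: int = 50):
        self.pooled = maxsys >= 9     # pooling kicks in at nine systems
        self.pool_size = pool_size
        self.pool = []
        self.cds_accesses = 0
        self.next_id = 0

    def _from_cds(self, count: int) -> list:
        """Obtain `count` reply-IDs in a single sysplex CDS access."""
        self.cds_accesses += 1
        ids = list(range(self.next_id, self.next_id + count))
        self.next_id += count
        return ids

    def get_reply_id(self) -> int:
        if not self.pooled:
            return self._from_cds(1)[0]          # one CDS access per WTOR
        if not self.pool:
            self.pool = self._from_cds(self.pool_size)  # refill the pool
        return self.pool.pop(0)
```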

GDPS_Check_SYSPLEX_CDS checks that the sysplex Couple Data Set recommendations are respected. This includes:

  • Primary, alternate, and spare CDSs are split across sites
  • CDSs are not allocated on mirrored devices
  • Primary LOGR CDS exception
  • CDSs are cataloged on all systems
  • Multiple sets of spares are allocated and cataloged
  • No single point of failure (SPOF): a possible alternate CDS is defined in each site

Query Services

GDPS V3.6 introduces a new interface to allow queries on GDPS monitored resources. This can help simplify problem determination. The query service is invoked through a REXX™ interface such as System Automation for z/OS.

The following options determine the information returned:

  • MONITORS returns Monitor statistics.
  • HYPERSWAP (or HS) returns HyperSwap related information.
  • SYSTEMS returns information about the systems within this GDPSPlex.
  • CPC returns the list of servers being monitored. This option is valid with GDPS/PPRC only.
  • DASD returns information on primary and secondary disk and SSIDs.
  • ALL returns all the information.

GDPS Query Services meets several customer requirements.
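
As a rough model of how the query keyword selects the returned information: the real service is invoked from a REXX environment, and this Python stand-in, its keys, and its sample payloads are purely illustrative.

```python
# Illustrative dispatch for the Query Services options listed above.
# Payload values are placeholders, not actual GDPS output.

RESOURCES = {
    "MONITORS":  {"monitor_statistics": "..."},
    "HYPERSWAP": {"hyperswap_status": "..."},
    "SYSTEMS":   {"gdpsplex_systems": "..."},
    "CPC":       {"monitored_servers": "..."},   # valid with GDPS/PPRC only
    "DASD":      {"primary_secondary_ssids": "..."},
}

def query(option: str) -> dict:
    option = option.upper()
    if option == "HS":            # HS is an alias for HYPERSWAP
        option = "HYPERSWAP"
    if option == "ALL":           # ALL returns every category at once
        return dict(RESOURCES)
    return RESOURCES[option]
```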

Statements of Direction

IBM plans to make available the following functions:

Alternate Subchannel Sets for Metro Mirror Secondaries

Using Metro Mirror, z/OS can define up to approximately 32K devices (disk volumes, and so on), each using a subchannel. GDPS/PPRC and GDPS/PPRC HyperSwap Manager are planning to support the definition of Metro Mirror secondary disk devices using subchannels in an alternate subchannel set. This can allow you to define approximately 64K pairs of devices in a Metro Mirror configuration. Using alternate subchannel sets can help provide scalability for larger disk configurations.

Support will require z/OS V1.10, or z/OS V1.9 with the PTF for APAR OA24142.
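
The scalability arithmetic behind this statement can be sketched as follows, under the approximation that a subchannel set holds about 64K subchannels. The function name and the exact constants are illustrative.

```python
# Illustrative arithmetic: when Metro Mirror primaries and secondaries both
# occupy subchannel set 0, only about half the set can hold primaries, so
# roughly 32K pairs fit. Moving secondaries to an alternate subchannel set
# frees set 0 for primaries, roughly doubling the definable pairs.

DEVICES_PER_SUBCHANNEL_SET = 64 * 1024  # approximate

def max_metro_mirror_pairs(secondaries_in_alternate_set: bool) -> int:
    if secondaries_in_alternate_set:
        return DEVICES_PER_SUBCHANNEL_SET        # ~64K pairs
    return DEVICES_PER_SUBCHANNEL_SET // 2       # ~32K pairs
```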

GDPS/MGM Incremental Resynchronization enhancement

GDPS/MGM is planned to be enhanced to require only changed tracks of data to be copied from the primary site to the intermediate site after a planned or unplanned outage of the intermediate site. This potentially reduces the disaster recovery exposure from hours down to minutes.

GDPS/PPRC timer support extensions

GDPS plans to continue improving the timer support with the following support for STP in a GDPS/PPRC environment:

  • Perform planned or unplanned STP-only CTN reconfigurations.
  • Respond to disabled console WTORs generated by z/OS when loss of synchronization is detected.

This support can help remove operator intervention if there is a loss of the primary time source.

New HyperSwap Trigger for z/VM

In GDPS V3.4, an I/O timing trigger was introduced for HyperSwap. GDPS triggers off the z/OS I/O Timing facility alert indicating there is a problem getting I/O back from a control unit. With GDPS/PPRC Multiplatform Resiliency for System z, this capability is planned to be extended to z/VM and z/VM guests, including Linux guests, when I/O response time objectives are not met. This function provides the equivalent of I/O timing protection for z/VM and its guests.


GDPS training and skills enablement is provided by the GDPS installation team as the GDPS solution is being planned, installed, and customized. In addition, a two-day GDPS Technical Consultation Workshop service offering is available for customers. This workshop is an excellent introduction to all the GDPS solutions and technology that you may be interested in using for your environment. For more information or to schedule a GDPS TCW, send a note to

IBM Global Services, Learning Services, provides education supporting many related IBM offerings.


For descriptions of courses worldwide, go to

Questions? Contact 800-IBM-TEACH (426-8322).

The following worldwide courses are available for classroom delivery:

  • GDPS/PPRC Concepts and Implementation (GDPS1AFR)
    This workshop is restricted to employees of enterprises with an existing GDPS license.
  • GDPS/XRC Concepts and Implementation (GDPS2)
    This workshop is restricted to employees of enterprises with an existing GDPS license.

Coexistence policy

GDPS gives you compatibility and flexibility as you migrate systems in a multisystem configuration by allowing two releases of GDPS to coexist. Coexistence allows systems within a multisystem configuration to be upgraded to a new release level of GDPS one system at a time, provided that the release you are migrating to can coexist with the lowest release running in your multisystem configuration. GDPS supports an (n, n-1) coexistence policy, allowing a GDPS release to coexist with the previous one in the same sysplex.

Service support policy

GDPS provides defect support for three releases at a time. For example, IBM plans to support GDPS V3.4 through GDPS V3.6. When the next release of GDPS after V3.6 becomes generally available, support will be dropped for GDPS V3.4. This gives customers time to migrate to current GDPS versions while remaining in a supported configuration. IBM, at its sole discretion, may choose to leave a release supported for longer than three releases.
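
The three-release service policy can be summarized in a small sketch; release identifiers are modeled as simple (major, minor) tuples for illustration.

```python
# Illustrative model of the service policy: three releases are supported
# at a time, so GA of a new release drops support for the third-oldest.

def supported_releases(current, depth: int = 3) -> list:
    """Return the release levels still in defect support when `current`
    (a (major, minor) tuple) is the newest generally available release."""
    major, minor = current
    return ["V%d.%d" % (major, minor - i) for i in range(depth)]

# When GDPS V3.6 is current: V3.6, V3.5, and V3.4 remain supported,
# and V3.3 has dropped out of support.
```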

Note: These statements represent the current intention of IBM. IBM reserves the right to change or alter the service support policy in the future or to exclude certain releases beyond those stated. IBM development plans are subject to change or withdrawal without further notice. Any reliance on this statement of direction is at the relying party's sole risk and does not create any liability or obligation for IBM.

The following table shows the planned general availability (GA) and end-of-service dates for GDPS and RCMF releases.

Release             GA             End of service
GDPS V3.3           1/25/06        March, 2009 (1)
GDPS V3.4           3/30/07        March, 2010 (1)
GDPS V3.5           3/31/08        March, 2011 (1)
GDPS V3.6           3/31/09        March, 2012 (1)
GDPS V3.6+1 (2)     March, 2010    March, 2013 (1)
GDPS V3.6+2 (2)     March, 2011    March, 2014 (1)
GDPS V3.6+3 (2)     March, 2012    March, 2015 (1)
(1) End of service dates are based upon the current intentions of IBM.
(2) GDPS levels beyond GDPS V3.6 represent current intentions of IBM.

End of service dates by release can be found at

This coexistence and service support policy applies for all configurations.

It is very important that you order the GDPS release you require for migration and coexistence while it is still available.

GDPS availability

GDPS V3.6 is planned for availability March 31, 2009. GDPS is designed to work in conjunction with the z10 BC, z10 EC, z9™ BC, z9 EC, z990, z890, z900, and z800 servers. For a complete list of other supported hardware platforms and software prerequisites, refer to the GDPS Web site

Accessibility by people with disabilities

A U.S. Section 508 Voluntary Product Accessibility Template (VPAT) containing details on accessibility compliance can be requested at

Business Partner information

If you are a Direct Reseller - System Reseller acquiring products from IBM, you may link directly to Business Partner information for this announcement. A PartnerWorld® ID and password are required (use IBM ID).

Product positioning

The GDPS solution suite includes many different service offerings designed to meet different user requirements:


  • RCMF/PPRC

    Remote Copy Management Facility (RCMF) provides management of the remote copy environment and disk configuration from a central point of control. The RCMF/PPRC offering can be used to manage a PPRC (Metro Mirror) remote copy environment.


  • RCMF/XRC

    RCMF/XRC is a remote copy management offering that can be used to manage a single SDM XRC (z/OS Global Mirror) remote copy environment. RCMF/XRC requires only Tivoli NetView®; it does not require Tivoli System Automation for z/OS.

  • GDPS/PPRC HyperSwap Manager

    GDPS/PPRC HyperSwap Manager provides either a single-site near-continuous availability solution or an entry-level multisite disaster recovery solution at a cost-effective price. GDPS/PPRC HyperSwap Manager is designed to allow clients to increase availability and provide applications with continuous access to data. Today, GDPS/PPRC HyperSwap Manager appeals to System z users who seek continuous availability and extremely fast recovery for disk data.

    Within a single site, or multiple sites, GDPS/PPRC HyperSwap Manager extends Parallel Sysplex availability to disk subsystems by masking planned and unplanned disk outages caused by disk maintenance and disk failures. It also provides management of the data replication environment and automates switching between the two copies of the data without causing an application outage, therefore providing near-continuous access to data.

    The GDPS/PPRC HyperSwap Manager solution is a subset of the full GDPS/PPRC solution, designed to provide a very affordable entry point to the full family of GDPS/PPRC offerings. It features specially priced limited-function Tivoli System Automation and NetView software products, thus satisfying the GDPS software automation prerequisites with a lower price and a cost-effective entry point to the GDPS family of offerings. Users who already have the full-function Tivoli System Automation and NetView software products may continue to use them as the prerequisites for GDPS/PPRC HyperSwap Manager.

    A client can migrate from a GDPS/PPRC HyperSwap Manager implementation to the full-function GDPS/PPRC capability as business requirements demand shorter recovery time objectives. The initial investment in GDPS/PPRC HyperSwap Manager is protected when you choose to move to full-function GDPS/PPRC by leveraging the existing GDPS/PPRC HyperSwap Manager implementation and skills.


  • GDPS/PPRC

    GDPS/PPRC complements a multisite Parallel Sysplex implementation by providing a single, automated solution to dynamically manage storage disk and tape subsystem mirroring, processors, and network resources. It is designed to help a business attain continuous availability and near-transparent business continuity (disaster recovery) with data consistency and no or minimal data loss. GDPS/PPRC is designed to minimize and potentially eliminate the impact of any failure, including disasters, or a planned outage.

    GDPS/PPRC is a full-function offering that includes the capabilities of GDPS/PPRC HM. It is designed to provide an automated comprehensive end-to-end solution to dynamically manage storage system mirroring, processors, and network resources for planned and unplanned events that could interrupt continued IT business operations.

    GDPS/PPRC control code can also manage open-system LUNs. This provides data consistency capability across z/OS and non-z/OS data with cross-platform Freeze. GDPS will provide one data consistency group for CKD data (z/OS) and one data consistency group for Fixed Block data (open system) or combine all volumes and LUNs into one consistency group for cross-platform data consistency.

    GDPS/PPRC control code also includes "GDPS/PPRC Multiplatform Resiliency for System z." In addition to providing data consistency, GDPS provides HyperSwap capability for Linux guests under z/VM and for Linux running directly in a System z LPAR after a disk subsystem failure, and GDPS automation to manage the restart of the Linux guests and LPARs at the disaster recovery site after a site failure.

    The GDPS/PPRC offering is a world-class solution built on the z/OS platform and yet can manage a heterogeneous environment.

    GDPS/PPRC control code is designed to provide the ability to perform a controlled site switch for both planned and unplanned site outages, with no or minimal data loss, to help maintain data integrity across multiple volumes and storage subsystems, and to allow a normal database management system (DBMS) restart - not a DBMS recovery - in the second site. GDPS/PPRC is application-independent and therefore is expected to cover your complete application environment.


  • GDPS/XRC

    Based upon IBM System Storage z/OS Global Mirror (eXtended Remote Copy, or XRC), GDPS/XRC control code is a combined hardware and z/OS software asynchronous remote-copy solution. Consistency of the data is maintained via the Consistency Group function within the System Data Mover (SDM). GDPS/XRC includes automation to manage remote copy pairs and automates the process of recovering the production environment with limited manual intervention, including invocation of Capacity Backup (CBU), which can provide significant value in helping to reduce the duration of the recovery window and require less operator interaction. GDPS/XRC attributes include:

    • Disaster recovery solution
    • RTO between one and two hours
    • RPO in minutes or less
    • Protection against localized or regional disasters, depending on the distance between the application site and the disaster recovery site (distance between sites is unlimited)
    • Minimal performance impact

    GDPS/XRC is well suited for large System z workloads and can be used for business continuance solutions, workload movement, and data migration.

    Because of the asynchronous nature of z/OS Global Mirror, it is possible to have the secondary disk at greater distances than would be acceptable for Metro Mirror (synchronous PPRC). Channel extender technology can be used to place the secondary disk thousands of kilometers away.

    In some cases an asynchronous disaster recovery solution is more desirable than one that uses synchronous technology. Sometimes applications are too sensitive and cannot tolerate the additional latency incurred when using synchronous copy technology.

    GDPS/XRC supports Linux running on System z. If your Linux on System z distribution supports time stamping of writes, GDPS can manage the XRC of Linux data. In the event of a primary site disaster (or planned site switch), GDPS/XRC can automate the recovery of Linux data and can restart Linux systems at the recovery site by booting them from the copied XRC disk.

  • GDPS/Global Mirror

    GDPS/Global Mirror offers a multisite, comprehensive end-to-end disaster recovery solution for your IBM z/OS and non-z/OS data.

    IBM GDPS/Global Mirror control code can help simplify data replication across any number of System z and/or non-System z servers to a remote site that can be at virtually any distance from the primary site. This can help provide rapid recovery and restart of both your System z and open-systems environments, for testing as well as for an actual disaster. Being able to test and practice recovery allows you to build skills so that you are ready when a disaster actually occurs.

    GDPS/Global Mirror control code is designed to manage the IBM System Storage Global Mirror copy services, manage the disk configuration, monitor the mirroring environment, and automate management and recovery tasks. It can perform failure recovery from a central point of control. This can provide the ability to synchronize System z and open-systems data at virtually any distance from your primary site.

    The point-in-time copy functionality offered by the IBM System Storage Global Mirror technology allows you to initiate a restart of your database managers on any supported platform, helping reduce complexity and avoiding the need to create and maintain separate recovery procedures for each one.

    All this helps provide a comprehensive end-to-end disaster recovery solution.

  • GDOC

    IBM Implementation Services for Geographically Dispersed Open Clusters will assist clients who:

    • Require rapid restart capability of multiple-vendor open-system servers at a remote site
    • Have difficulty setting up, testing, and managing multiple-vendor server recovery operations
    • Desire assistance in implementing and configuring data replication
    • Need a disaster recovery solution that is as easy as possible to operate
    • Need to train support personnel with as little expense as possible
    • Have storage servers containing open-systems data requiring disk mirroring at long distances
    • Need to coordinate consistency across multiple platforms and multiple clusters
    • Need a central point of control for data replication management and recovery automation
    • Require unlimited distances between their primary and recovery sites
    • Need to be able to automate the start of an AIX®, Hewlett-Packard (HP) UX, Linux, Microsoft® Windows®, or Sun Solaris server at the remote site

    GDOC is not applicable to a System z configuration; it covers open or distributed systems and interfaces with GDPS to create an enterprise-level, end-to-end solution.

The offerings listed above can be combined as follows:

  • GDPS/PPRC used with GDPS/XRC (GDPS/MzGM)

    GDPS PPRC/XRC combines the advantages of metropolitan-distance business continuity with regional or long-distance disaster recovery. This can provide a near-continuous availability solution with no data loss and minimal application impact across two sites located at metropolitan distances, plus a disaster recovery solution with recovery at an out-of-region site with minimal data loss.

    A typical GDPS PPRC/XRC configuration has the primary disk copying data synchronously to a location within the metropolitan area using Metro Mirror (PPRC), as well as asynchronously to a remote disk subsystem a long distance away via z/OS Global Mirror (XRC). This enables a z/OS three-site high availability and disaster recovery solution for even greater protection from planned and unplanned outages.

    Combining the benefits of PPRC and XRC, GDPS PPRC/XRC enables:

    • HyperSwap capability for near-continuous availability for a disk control unit failure
    • An option designed to enable no data loss
    • Data consistency to allow restart, not recovery
    • A long-distance disaster recovery site for protection against a regional disaster
    • Minimal application impact
    • GDPS automation to manage remote copy pairs, manage a Parallel Sysplex configuration, and perform planned as well as unplanned reconfigurations

    The same primary volume is used for both PPRC and XRC data replication and can be managed by two different GDPS implementations: GDPS/PPRC for metropolitan distance and business continuity, and GDPS/XRC for regional distance and disaster recovery.

    The two mirroring technologies and GDPS implementations work independently of each other, yet provide the synergy of a common management scheme and common skills.

    Since GDPS/XRC supports System z data only (z/OS, Linux on System z), GDPS Metro/z/OS Global Mirror is a System z solution only.
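    The GDPS/MzGM configuration described above is a multi-target topology: the same primary volume feeds a synchronous metro leg and an asynchronous long-distance leg. The toy sketch below illustrates why the two legs have different loss characteristics; the class and method names are invented for illustration and are not product interfaces.

```python
# Toy multi-target replication, as in GDPS Metro/z/OS Global Mirror:
# one primary, one synchronous (Metro Mirror) copy, one asynchronous (XRC)
# copy. Illustrative only; names are invented.

class MultiTargetPrimary:
    def __init__(self):
        self.data = {}         # production volume
        self.metro = {}        # synchronous metro-distance copy
        self.remote = {}       # asynchronous long-distance copy
        self.async_queue = []  # updates not yet drained to the remote site

    def write(self, track, value):
        # Synchronous leg: the write completes only once the metro copy has
        # it, which is why Metro Mirror can offer a no-data-loss option.
        self.data[track] = value
        self.metro[track] = value
        # Asynchronous leg: queued and drained later, so a small RPO (some
        # data loss) is possible after a primary-site disaster.
        self.async_queue.append((track, value))

    def drain(self):
        # Periodically move queued updates to the remote copy.
        for track, value in self.async_queue:
            self.remote[track] = value
        self.async_queue.clear()

p = MultiTargetPrimary()
p.write("T1", "x")
# Before a drain, the metro copy is current but the remote copy lags.
```

    Because the two legs operate independently, losing either mirror leaves the other intact, matching the description above of two mirroring technologies working independently under a common management scheme.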

  • GDPS/PPRC used with GDPS/Global Mirror (GDPS/MGM)

    GDPS Metro/Global Mirror has the benefit of being able to manage across the configuration all formats of data, as Global Mirror is not limited to zSeries® formatted data.

    GDPS Metro/Global Mirror combines the benefits of GDPS/PPRC using Metro Mirror, with GDPS/Global Mirror using IBM System Storage Global Mirror. A typical configuration has the secondary disk from a Metro Mirror remote copy configuration in turn becoming the primary disk for a Global Mirror remote copy pair. Data is replicated in a "cascading" fashion.

    Combining the benefits of PPRC and Global Mirror, GDPS Metro/Global Mirror enables:

    • HyperSwap capability for near-continuous availability for a disk control unit failure
    • An option designed to enable no data loss
    • Maintenance of disaster recovery capability after a HyperSwap
    • Data consistency to allow restart, not recovery, at either Site 2 or Site 3
    • A long-distance disaster recovery site for protection against a regional disaster
    • Minimal application impact
    • GDPS automation to manage remote copy pairs, manage a Parallel Sysplex configuration, and perform planned as well as unplanned reconfigurations

    In addition, GDPS Metro/Global Mirror can do this for both System z as well as open-systems data, and provide consistency between them.
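    The cascading arrangement described above can be sketched the same way: unlike the multi-target GDPS/MzGM configuration described earlier, here the metro secondary (Site 2), not the production disk, feeds the asynchronous leg to Site 3. This is a toy illustration with invented names, not a product interface.

```python
# Toy three-site cascade, as in GDPS Metro/Global Mirror: Site 1 mirrors
# synchronously to Site 2 (Metro Mirror); Site 2 is in turn the Global Mirror
# primary and drains asynchronously to Site 3. Illustrative only.

class Cascade:
    def __init__(self):
        self.site1 = {}    # production disk
        self.site2 = {}    # metro secondary and Global Mirror primary
        self.site3 = {}    # long-distance copy
        self.gm_queue = [] # updates Site 2 has not yet sent to Site 3

    def write(self, track, value):
        # Synchronous leg: Site 1 and Site 2 are updated together.
        self.site1[track] = value
        self.site2[track] = value
        # The cascade: Site 2 feeds the asynchronous leg, so the
        # long-distance link adds no latency to production writes.
        self.gm_queue.append((track, value))

    def drain(self):
        for track, value in self.gm_queue:
            self.site3[track] = value
        self.gm_queue.clear()

c = Cascade()
c.write("T1", "x")
c.write("T2", "y")
c.drain()
```

    Feeding the asynchronous leg from Site 2 is also what preserves disaster recovery capability after a HyperSwap: production can swap to the Site 2 disk while the Global Mirror relationship to Site 3 continues.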

  • GDPS used with distributed cluster managers

    There are several products that support distributed clusters, including:

    • Symantec's Veritas Cluster Server
    • IBM's Tivoli System Automation Application Manager

    The cluster managers can help manage disk remote copy and non-System z applications across sites. GDPS can help manage disk remote copy and System z servers across sites. Using GDPS V3.6 DCM support, GDPS can have hooks into the distributed clusters to pass information and coordinate disaster recovery.

    This appeals to customers who desire a single disaster recovery image across their heterogeneous server environment.

    The GDPS solutions that support DCM include GDPS/PPRC for both VCS and Tivoli SA AppMan, GDPS/XRC for VCS, and GDPS/GM for VCS.

  • BCPM

    IBM Tivoli Business Continuity Process Manager:

    • Assists with defining the recovery processes required to identify and recover critical business systems as quickly as possible when an outage occurs, reducing the cost of the outage to the business
    • Reduces the risk to a successful recovery and provides audit-ready reports via simulated test runs
    • Provides an analysis tool for relating triggered incidents to the impacted business in disaster situations
    • Manages pre-tested, automated processes so that recovery can be handled by less-skilled operators
    • Informs management through automatic alerts and can require approval before the defined recovery plan processes are executed
    • Extends the value of GDPS with Tivoli System Automation by recovering the business in addition to the platform technology

Reference information

The following resources are available on the Internet for more information about GDPS:

Previous GDPS-related announcements are:

  • GDPS/PPRC HyperSwap Manager: Providing continuous availability of consistent data, Marketing Announcement 305-015, dated February 15, 2005
  • IBM System z9® 109 - The server built to protect and grow with your on demand enterprise, Hardware Announcement 105-241, dated July 27, 2005
  • IBM Implementation Services for Geographically Dispersed Parallel Sysplex™ for managing disk mirroring using IBM Global Mirroring, Services Announcement 605-035, dated October 18, 2005
  • IBM GDPS V3.3: Improving disaster recovery capabilities to help ensure a highly available, resilient business environment, Marketing Announcement 306-024, dated February 14, 2006
  • IBM GDPS V3.4: Enterprise-wide disaster recovery, Marketing Announcement 307-045, dated March 13, 2007
  • IBM GDPS V3.5: Enterprise-wide infrastructure availability, Marketing Announcement 308-001, dated February 26, 2008
  • IBM Implementation Services for Geographically Dispersed Open Clusters, Services Announcement 606-029, dated December 12, 2006
  • Enhancements to IBM GDPS to integrate with Geographically Dispersed Open Clusters (GDOC), Services Announcement 607-072, dated December 18, 2007
  • IBM System Storage DS8000 series (M/T 242x) delivers IBM System z capabilities, Hardware Announcement 108-155, dated February 26, 2008
  • IBM Tivoli Business Continuity Process Manager V7.1 provides highly configurable and adaptable processes for planning, testing, and execution of recovery and continuity activities for disaster situations, Software Announcement 208-095, dated May 13, 2008
  • IBM Tivoli System Automation for Multiplatforms V3.1 enables high availability and disaster recovery for applications and IT services running in heterogeneous and virtual IT environments, Software Announcement 208-096, dated May 13, 2008
  • IBM Tivoli System Automation Application Manager V3.1 enables high availability and disaster recovery for composite applications that span complex heterogeneous environments, Software Announcement 208-097, dated May 13, 2008

Order now

To order, contact the Americas Call Centers or your local IBM representative, or your IBM Business Partner.

To identify your local IBM representative or IBM Business Partner, call 800-IBM-4YOU (426-4968).

Phone:      800-IBM-CALL (426-2255)
Fax:        800-2IBM-FAX (242-6329)
Mail:       IBM Teleweb Customer Support
            Sales Execution Center, Americas North
            3500 Steeles Ave. East, Tower 3/4
            Markham, Ontario
            L3R 2Z1

Reference: YE001

The Americas Call Centers, our national direct marketing organization, can add your name to the mailing list for catalogs of IBM products.

Note: Shipments will begin after the planned availability date.

DS8000, HyperSwap, ECKD, DFSMS, System Storage, REXX, z9, and Geographically Dispersed Parallel Sysplex are trademarks of IBM Corporation in the United States, other countries, or both.

GDPS, IBM, System z, z/VM, FlashCopy, z/OS, Tivoli, DB2, Parallel Sysplex, PartnerWorld, NetView, AIX, zSeries, and System z9 are registered trademarks of IBM Corporation in the United States, other countries, or both.

Linux is a registered trademark of Linus Torvalds in the United States, other countries, or both.

Microsoft and Windows are registered trademarks of Microsoft Corporation in the United States, other countries, or both.

Other company, product, and service names may be trademarks or service marks of others.

Terms of use

IBM products and services which are announced and available in your country can be ordered under the applicable standard agreements, terms, conditions, and prices in effect at the time. IBM reserves the right to modify or withdraw this announcement at any time without notice. This announcement is provided for your information only. Additional terms of use are located at:

For the most current information regarding IBM products, consult your IBM representative or reseller, or visit the IBM worldwide contacts page.

