Use this process to perform failover and restore operations to
your remote (C) site during a planned outage.
Before you issue a failover operation to the remote site, ensure
that data processing has completely stopped at the local and intermediate
sites. If data continues to be copied to the A and B volume
pairs at the local and intermediate sites, a failover to the remote
site can cause data loss. Ensuring that processing has stopped is
your responsibility; it cannot be enforced by the TSO commands or the API.
This scenario describes a failover operation that moves production
from the local site to the remote site, followed by a failback
operation when processing is ready to return
to the local site. Assume that host I/O cannot be sent to the local
site in a Metro/Global Mirror configuration and it is not possible
to run your systems using the B volumes at the intermediate site.
You can switch operations to your remote site, which allows the processing
of data to resume at the remote site. The Global Copy relationships
between volumes at the intermediate and remote sites are still operational.
Global Mirror continues to operate between these two sites.
Follow these steps for failover and restore operations at the remote
site:
- At the local site, ensure that data consistency is achieved between
the A and B volume pairs.
You can use either of the following methods to create data consistency:
- Quiesce I/O processing to the A volumes at the local site.
- Freeze write activity to the Metro Mirror primary volumes by performing
the following steps:
- Freeze updates to the A volumes in Metro Mirror relationships
across the affected LSSs. This ensures that the B volumes are consistent
at the time of the freeze. (One command per storage unit or LSS is
required.)
- Resume operations following a freeze. This operation, also called
a thaw operation, allows I/O processing to resume for the specified
volume pairs.
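As a sketch, the freeze and thaw steps might be issued with the DS CLI as follows. The device IDs, LSS pair, and session details are placeholders for your configuration, and the exact option set can vary by DS CLI level:

```
# Freeze updates to the A volumes across the affected LSS pairs
# (one command per storage unit or LSS pair).
dscli> freezepprc -dev IBM.2107-75LOCAL1 -remotedev IBM.2107-75INTER1 10:10

# Thaw: allow host I/O to resume for the frozen LSS pairs.
dscli> unfreezepprc -dev IBM.2107-75LOCAL1 -remotedev IBM.2107-75INTER1 10:10
```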
- Verify that the last data from the local site has been included
in a Global Mirror consistency group. Monitor this activity to determine
when at least two consistency groups have formed since the local site
I/O was quiesced or the freezes were issued. The total successful
consistency group count field from the query output displays this
information. At this point, the data on the B, C, and D volumes is
consistent.
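One way to monitor consistency-group formation is to query the Global Mirror session repeatedly; the metrics output includes the total successful consistency group count. The device ID and LSS below are placeholders:

```
# Query Global Mirror metrics for the master LSS; repeat until the
# total successful consistency group count has increased by at least
# two since I/O was quiesced or the freezes were issued.
dscli> showgmir -dev IBM.2107-75INTER1 -metrics 10
```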
- Stop the Global Mirror session.
- Verify that the Global Mirror session has ended. Consistency groups
do not form while Global Mirror processing is stopped.
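A hedged DS CLI sketch for stopping the session and confirming that it has ended; the session number, LSS, and device ID are placeholders:

```
# End Global Mirror processing for session 01.
dscli> rmgmir -quiet -dev IBM.2107-75INTER1 -lss 10 -session 01

# Verify: the session should no longer report that consistency
# groups are forming.
dscli> showgmir -dev IBM.2107-75INTER1 10
```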
- Delete the relationships between the B and C volume pairs at the
intermediate and remote sites. This prepares for reversing the direction
of the volume pair from the remote site to the intermediate site.
The cascaded relationship ends as well. Note: When the relationships
between the B and C volumes are deleted, the cascade parameter is
disabled for the B volumes and the B volumes are no longer detected
as being in cascaded relationships.
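This deletion might look as follows in the DS CLI; volume ranges and device IDs are placeholders:

```
# Remove the Global Copy relationships for the B:C volume pairs
# between the intermediate and remote sites.
dscli> rmpprc -quiet -dev IBM.2107-75INTER1 -remotedev IBM.2107-75REMOT1 1000-1003:1000-1003
```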
- Issue a failover command to the B and A volume pairs, with the
Cascade option. With this process, updates are collected using the
change recording feature, which allows for the resynchronization of
the B and A volumes.
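A minimal sketch of the failover with the cascade option, assuming placeholder device IDs and volume ranges:

```
# Fail over the B volumes; change recording collects updates so the
# B and A volumes can be resynchronized later.
dscli> failoverpprc -dev IBM.2107-75INTER1 -remotedev IBM.2107-75LOCAL1 -type mmir -cascade 1000-1003:1000-1003
```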
- Create Global Copy relationships using the C and B volume pairs.
Specify the NOCOPY option. Note: You can specify the NOCOPY option
because the B and C volumes contain exact copies of the data.
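For example, with placeholder device IDs and volume ranges:

```
# Establish Global Copy from C to B; -mode nocp skips the full
# initial copy because the volumes already contain identical data.
dscli> mkpprc -dev IBM.2107-75REMOT1 -remotedev IBM.2107-75INTER1 -type gcp -mode nocp 1000-1003:1000-1003
```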
- Start I/O processing at the remote site. Continue in this mode
until production is ready to return to the local site.
- When you are ready to return production to the local site, quiesce
I/O processing at the remote site. This begins the transition of
host I/O back to the A volumes.
- Wait for the number of out-of-sync tracks on the C and B volume pairs
to reach zero. You can monitor this activity by querying the status
of the C and B volumes. As soon as the number of out-of-sync tracks
reaches zero, all data has been copied and the data on the C and B
volumes is equal. All updates that are needed to resynchronize the
A volumes are recorded at the B volumes.
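One way to monitor this, with placeholder IDs and ranges:

```
# The long listing includes an out-of-sync tracks column; repeat the
# query until it reports 0 for every C:B pair.
dscli> lspprc -l -dev IBM.2107-75REMOT1 -remotedev IBM.2107-75INTER1 1000-1003
```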
- Reestablish paths (that were disabled by the freeze operation)
between the local site LSS and intermediate site LSS that contain
the B to A Metro Mirror volume pairs.
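Re-creating the paths might be sketched as follows; the WWNN, port pairs, and LSS numbers are placeholders for your environment:

```
# Re-create the PPRC paths from the intermediate-site LSS to the
# local-site LSS that contain the B to A volume pairs.
dscli> mkpprcpath -dev IBM.2107-75INTER1 -remotedev IBM.2107-75LOCAL1 -srclss 10 -tgtlss 10 -remotewwnn 5005076303FFC123 I0010:I0010
```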
- Issue a failback command to the B to A volume pairs. This
command copies the changes back to the A volumes that were made to
the B volumes while hosts were running on the B volumes. The A volumes
are now synchronized with the B volumes.
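For example, with placeholder device IDs and volume ranges:

```
# Copy the changes recorded at the B volumes back to the A volumes.
dscli> failbackpprc -dev IBM.2107-75INTER1 -remotedev IBM.2107-75LOCAL1 -type mmir 1000-1003:1000-1003
```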
- Wait for the copy process of the B and A volume pairs to reach
full duplex (all out-of-sync tracks have completed copying). You can
monitor this activity by querying the status of the B and A volumes.
As soon as the number of out-of-sync tracks reaches zero, all data
has been copied and the data on the B and A volumes is equal. At this
point, the data on volumes A, B, and C is equal.
- Delete the Global Copy relationships between the C and B volume
pairs at the intermediate and remote sites. Deleting the Global
Copy relationships between the C to B volume pairs prepares for restoring
to the original Global Copy relationships between the B to C volume
pairs.
- Issue a failover command to the A and B volume pairs. This process
ends the Metro Mirror relationships between the B and A volumes and
establishes the Metro Mirror relationships between the A and B volumes.
- Reestablish paths (that were disabled by the freeze operation)
between the local site LSS and the intermediate site LSS that contain
the B to A Metro Mirror volume pairs.
- Issue a failback command to the A to B volume pairs. This command
copies the changes back to the A volumes that were made to the B volumes
in Metro Mirror relationships while hosts were running on the B volumes.
The A volumes are now synchronized with the B volumes.
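The return of the Metro Mirror direction to A-to-B might be sketched as follows; all device IDs, volume ranges, WWNNs, and port pairs are placeholders:

```
# Fail over the A volumes: ends the B->A relationships and
# establishes A->B Metro Mirror relationships.
dscli> failoverpprc -dev IBM.2107-75LOCAL1 -remotedev IBM.2107-75INTER1 -type mmir 1000-1003:1000-1003

# Re-create the paths from the local-site LSS to the intermediate-site LSS.
dscli> mkpprcpath -dev IBM.2107-75LOCAL1 -remotedev IBM.2107-75INTER1 -srclss 10 -tgtlss 10 -remotewwnn 5005076303FFC456 I0010:I0010

# Fail back: copy the changes made to the B volumes back to the A volumes.
dscli> failbackpprc -dev IBM.2107-75LOCAL1 -remotedev IBM.2107-75INTER1 -type mmir 1000-1003:1000-1003
```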
- Reestablish the B to C volume pairs in Global Copy relationships.
Specify the NOCOPY and the Cascade options.
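For example, with placeholder IDs and ranges:

```
# Restore the original B->C Global Copy relationships; -mode nocp
# skips the initial copy, and -cascade allows the B volumes to remain
# Metro Mirror secondaries.
dscli> mkpprc -dev IBM.2107-75INTER1 -remotedev IBM.2107-75REMOT1 -type gcp -mode nocp -cascade 1000-1003:1000-1003
```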
- Use FlashCopy® to create
a copy of C source volumes to the D target volumes, specifying the
ASYNC option.
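A hedged DS CLI sketch of this step; ASYNC is the TSO keyword, and the option set shown here (record, persist, no background copy) is a typical assumption for the Global Mirror C-to-D FlashCopy rather than a one-to-one mapping:

```
# Re-create the C->D FlashCopy relationships used by Global Mirror.
dscli> mkflash -dev IBM.2107-75REMOT1 -record -persist -nocp 1000-1003:1100-1103
```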
- Restart Global Mirror processing.
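For example, assuming the same placeholder session and LSS as earlier steps:

```
# Restart Global Mirror consistency-group formation for session 01.
dscli> resumegmir -dev IBM.2107-75INTER1 -lss 10 -session 01
```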
- Resume host I/O processing to the A volumes.