V5.1.x - IBM System Storage SAN Volume Controller Restrictions

Preventive Service Planning


Abstract

This document lists the restrictions specific to SAN Volume Controller V5.1.x. Additional restrictions may be imposed by hardware attached to a SAN Volume Controller cluster, such as switches and storage controllers.

Content

DS4000 Maintenance
Host Limitations
SAN Fibre Networks
SAN Routers and Fibre Channel Extenders
SAN Maintenance
SAN Volume Controller Software Upgrade
Maximum Configurations



DS4000 Maintenance

SVC supports concurrent ESM firmware upgrades for the DS4000 models listed as such on the "Supported Hardware List" when they are running controller firmware 06.23.05.00 or later. Controllers running firmware levels earlier than 06.23.05.00 are not supported for concurrent ESM upgrades. Customers in this situation who wish to gain support for concurrent ESM upgrades must first upgrade the DS4000 controller firmware to 06.23.05.00. This is a controller firmware upgrade, not an ESM upgrade, and concurrent controller firmware upgrades are already supported in conjunction with SVC. Once the controller firmware is at 06.23.05.00 or later, the ESM firmware can be upgraded concurrently.

Note: The ESM firmware upgrade must be done on one disk expansion enclosure at a time. Allow a 10-minute delay between completing the upgrade of one enclosure and starting the upgrade of the next. Confirm via the Storage Manager application's "Recovery Guru" that the DS4000 status is optimal before upgrading the next enclosure. If it is not, do not continue with ESM firmware upgrades until the problem is resolved.



Host Limitations

Windows NTP server

The Linux NTP client used by SAN Volume Controller may not always function correctly with the Windows W32Time NTP server.

Windows SAN Boot Clusters (MSCS):

It is possible to SAN Boot a Microsoft Cluster subject to the following restrictions imposed by Microsoft:
  • Windows 2000 Server clusters require that the boot disk be on a different storage bus to the cluster server disks.
  • On Windows 2003, clustered disks and boot disks can be presented on the same storage bus, but ONLY if the Storport driver is being used.

These restrictions and more are described in the Microsoft White Paper: "Microsoft Windows Clustering: Storage Area Networks".

We have not tested, and therefore do not support, modifying the registry key as suggested on page 28 (which would allow boot disks and clustered disks on the same storage bus on Windows 2003 without the Storport driver).


Oracle

Oracle Version and OS               Restrictions that apply
Oracle RAC 10g on Windows           1
Oracle RAC 10g on AIX               1, 2
Oracle RAC 11g on AIX               2
Oracle RAC 10g on HP-UX 11.31       1, 2
Oracle RAC 11g on HP-UX 11.31       1, 2
Oracle RAC 10g on HP-UX 11.23       1, 2
Oracle RAC 11g on HP-UX 11.23       1, 2
Oracle RAC 10g on Linux host        1, 3

Restriction 1: ASM cannot recognise a change in the size of an SVC disk when it is resized unless the disk is removed from ASM and added again.

Restriction 2: After an ASM disk group has successfully dropped a disk, the disk cannot be deleted from the OS. The workaround to the OS restriction is to bring down the ASM instance, delete the disk from the OS, and bring up the ASM instance again.

Restriction 3: On RHEL 4, set the Oracle Clusterware 'misscount' parameter to a larger value to allow SDD to complete path failover first. The default misscount setting of 60 seconds is too short for SDD; we recommend setting it to 90 or 120 seconds. Command to use: crsctl set css misscount 90



SAN Fibre Networks

Please refer to this document for details on how to configure a supported SAN:

IBM System Storage SAN Volume Controller V5.1.0 - Software Installation and Configuration Guide



SAN Routers and Fibre Channel Extenders

Fibre Channel Extender Technologies:

IBM will support any fibre channel extender technology provided that it is planned, installed and tested to meet the requirements specified in the Software Installation and Configuration Guide:

IBM System Storage SAN Volume Controller V5.1.0 - Software Installation and Configuration Guide


SAN Router Technologies:

There are distance restrictions imposed due to latency. The amount of latency which can be tolerated depends on the type of copy services being used (Metro Mirror or Global Mirror). Details of the maximum latencies supported can be found in the Software Installation and Configuration Guide:

IBM System Storage SAN Volume Controller V5.1.0 - Software Installation and Configuration Guide



SAN Maintenance

A number of maintenance operations in SAN fabrics have been observed to occasionally cause I/O errors for certain types of hosts. To avoid these errors, I/O on these hosts must be quiesced before any SAN reconfiguration activity, switch maintenance or SAN Volume Controller maintenance (see the SAN Volume Controller Software Upgrade section below for Concurrent Code Load restrictions).
  1. Linux RH EL 2.1 AS and 3 AS



SAN Volume Controller Software Upgrade

I/O errors have occasionally been observed during cluster software upgrades with hosts running the operating system levels below. All I/O should be quiesced on these systems before a software upgrade is started and should not be restarted until the upgrade is complete.
  1. Linux RH EL 2.1 AS and 3 AS
  2. Solaris 9 on SBus based systems

Prior to starting a software upgrade, the SAN Volume Controller error log must be checked and any error conditions must be resolved and marked as fixed. All host paths must be online, and the SAN fabric must be fully redundant with no failed paths. If inter-cluster Remote Copy (Metro Mirror or Global Mirror) is being used, the same checks must be made on the remote clusters.



Maximum Configurations

Ensure that you are familiar with the maximum configurations for SAN Volume Controller 5.1.0:

Property
Maximum Number
Comments
Cluster Properties
Nodes per cluster
8
Arranged as four I/O groups
Nodes per fabric
64
Maximum number of nodes that can be present on the same fabric, with visibility of each other
I/O groups per cluster
4
Each containing two nodes
Fabrics per cluster
4
The number of counterpart SANs which are supported
Inter-cluster partnerships per cluster
3
A cluster may be partnered with up to three remote clusters. No more than four clusters may be in the same connected set
Node Properties
Logins per node Fibre Channel port
512
Includes logins from server HBAs, disk controller ports, SVC node ports within the same cluster and SVC node ports from remote clusters
Managed Disk Properties
Managed disks (MDisks) per cluster
4096
The maximum number of logical units which can be managed by SVC. The number includes disks which have not been configured into managed disk groups
Managed disks per managed disk group
128
Managed disk groups per cluster
128
Managed disk capacity
2 TB
Maximum size for an individual logical unit
Total storage capacity manageable per cluster
8 PB
Requires the maximum extent size of 2048 MB to be used.
This limit represents the per-cluster maximum of 2^22 extents (see the worked example following this table).
Virtual Disk Properties
Virtual disks (VDisks) per cluster
8192
Maximum requires an 8-node cluster; refer to the virtual disks per I/O group limit below
Virtual disks per I/O group
2048
Virtual disks per managed disk group
-
No limit is imposed beyond the per-cluster VDisk limit
Fully-allocated virtual disk capacity
256 TB
Requires maximum extent size of 2048 MB to be used.
This limit represents the per VDisk maximum of 2^17 extents.

Note: Do Not Use VDisks Larger than 2TB in FlashCopy Mappings
Space-efficient virtual disk capacity
260,000 GB
Requires maximum extent size of 2048 MB to be used.
This limit is less than the fully-allocated virtual disk capacity due to the additional capacity required for the space-efficient metadata.

Note: Do Not Use VDisks Larger than 2TB in FlashCopy Mappings
Virtual disk mappings per host object
512
Virtual disk to host mappings per cluster
20,000
Mirrored Virtual Disk Properties
Copies per VDisk
2
VDisk copies per cluster
8192
A cluster with the maximum number of VDisks cannot have the maximum number of copies on every VDisk
Total Mirrored VDisk capacity per I/O group
1024 TB
This maximum configuration will consume all 512 MB of bitmap space for the I/O group and allow no Metro or Global Mirror or FlashCopy bitmap space.
Generic Host Properties
Host objects (IDs) per cluster
1024
A host object may contain both Fibre Channel ports and iSCSI names
Host objects (IDs) per I/O group
256
Refer to the additional Fibre Channel and iSCSI host limits below
Total Fibre Channel ports and iSCSI names per cluster
2048
Total Fibre Channel ports and iSCSI names per I/O group
512
Total Fibre Channel ports and iSCSI names per host object
512
Fibre Channel Host Properties
Fibre Channel hosts per cluster
1024 - Cisco, Brocade and McDATA fabrics
155 - CNT
256 - QLogic
See also - Fibre Channel hosts per I/O group below.
For Brocade support, please see Note 2 below this table.
Fibre Channel host ports per cluster
2048 - Cisco, McDATA and Brocade fabrics
310 - CNT
512 - QLogic
Fibre Channel hosts per I/O group
256 - Cisco, McDATA and Brocade fabrics
N/A - CNT
64 - QLogic
Fibre Channel host ports per I/O group
512 - Cisco, McDATA and Brocade fabrics
N/A - CNT
128 - QLogic
Fibre Channel host ports per host object (ID)
512
iSCSI Host Properties
iSCSI hosts per cluster
256
See also - iSCSI hosts per I/O group below
iSCSI hosts per I/O group
64
iSCSI names per host object
256
iSCSI names per I/O group
256
Copy Services
Remote Copy (Metro Mirror and Global Mirror) relationships per cluster
8192
This can be any mix of Metro Mirror and Global Mirror relationships.
Maximum requires an 8-node cluster (Virtual disk per I/O group limit applies)
Remote Copy relationships per consistency group
-
No limit is imposed beyond the Remote Copy relationships per cluster limit
Remote Copy consistency groups per cluster
256
Total Metro Mirror and Global Mirror VDisk capacity per I/O group
1024 TB
This limit is the total capacity for all master and auxiliary VDisks in the I/O group.

This maximum configuration will consume all 512 MB of bitmap space for the I/O group and allow no FlashCopy or VDisk mirroring bitmap space.
FlashCopy mappings per cluster
4096
FlashCopy targets per source
256
Cascaded Incremental FlashCopy maps
4
A virtual disk can be the source of up to 4 incremental FlashCopy maps. If this number of maps is exceeded then the FlashCopy behaviour becomes non-incremental.
FlashCopy mappings per consistency group
512
FlashCopy consistency groups per cluster
127
Total FlashCopy VDisk capacity per I/O group
1024 TB
This is a per I/O group limit on the total capacity for all FlashCopy mappings using bitmap space from a given I/O Group.

This maximum configuration will consume all 512 MB of bitmap space for the I/O Group and allow no Metro or Global Mirror or VDisk mirroring bitmap space.

Note: Do Not Use VDisks Larger than 2TB in FlashCopy Mappings
Copy services bitmap memory per I/O group
512 MB
Storage System Properties
Storage controller WWNNs per cluster
64
Note: One storage controller WWNN is assigned per 2145-CF8 node containing Solid-State Drives (SSDs).

Some storage controllers have a separate WWNN per port, e.g. Hitachi Thunder
Storage controller WWPNs per cluster
256
WWNNs per storage system
16
WWPNs per WWNN
16
LUNs (managed disks) per cluster
4096
LUNs (managed disks) per storage controller
-
No limit is imposed beyond the LUNs (managed disks) per cluster limit
Internal Solid-State Drives per node
4
Applicable only to 2145-CF8 nodes
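
The extent-based capacity limits in the table above follow directly from the extent counts and the 2048 MB maximum extent size. The short Python sketch below is purely illustrative and simply reproduces that arithmetic, assuming binary units (1 TB = 2^40 bytes, 1 PB = 2^50 bytes); it is not part of any SVC interface.

# Illustrative arithmetic only: reproduces the extent-based limits quoted
# in the table above, assuming the maximum extent size of 2048 MB.
MB, TB, PB = 2 ** 20, 2 ** 40, 2 ** 50
extent_size = 2048 * MB          # maximum extent size, in bytes

cluster_extents = 2 ** 22        # per-cluster extent limit
vdisk_extents = 2 ** 17          # per-VDisk extent limit

print(cluster_extents * extent_size / PB, "PB per cluster")   # 8.0 PB
print(vdisk_extents * extent_size / TB, "TB per VDisk")       # 256.0 TB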

Note 1: Fabric and device support
A statement of support for a particular fabric configuration here reflects the fact that SVC has been tested with, and is supported for attachment to, that fabric configuration. Similarly, a statement that SVC supports attachment to a particular backend device or host type reflects the fact that SVC has been tested and is supported for that attachment. However, SVC is only supported for attachment to particular devices in a given fabric if both IBM and the fabric vendor support that attachment. It is the user's responsibility to verify that this is true for the particular configuration of interest, as it is impossible to list individual 'support' or 'no support' statements for every possible intermix of front-end and backend devices and fabric types.

Note 2: Support for large fabrics (>64 hosts)
The following restrictions apply to support for fabrics with up to 1024 fibre channel hosts with SVC 5.1.x:

1. All switches with more than 64 ports are supported as core switches, with the exception of the Brocade M12. Any supported switch may be used as an edge switch in this configuration. The SVC ports and backend storage must all be connected to the core switches.

2. The minimum supported firmware level for Brocade core switches is 5.1.0c.

3. Each SVC port must not see more than 512 N port logins. Error code 1800 is logged if this limit is exceeded.

4. Each I/O group may not be associated with more than 256 host objects.

5. A host object may be associated with one or more I/O groups. If it is associated with more than one I/O group, it counts towards the 256-host maximum in each of the I/O groups with which it is associated.
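
To illustrate the counting rule in item 5, the hypothetical Python sketch below checks a host-to-I/O-group mapping against the 256 host-object limit per I/O group. The function name and data structures are invented for the example and do not correspond to any SVC interface.

# Hypothetical helper: a host object associated with several I/O groups
# counts once towards the 256 host-object limit in each of those groups.
from collections import Counter

MAX_HOSTS_PER_IO_GROUP = 256

def io_groups_over_limit(host_to_io_groups):
    counts = Counter()
    for host, io_groups in host_to_io_groups.items():
        for io_group in set(io_groups):
            counts[io_group] += 1
    return {g: n for g, n in counts.items() if n > MAX_HOSTS_PER_IO_GROUP}

# Example: host0 is mapped through two I/O groups, so it counts towards
# the limit of both io_grp0 and io_grp1.
print(io_groups_over_limit({"host0": ["io_grp0", "io_grp1"],
                            "host1": ["io_grp0"]}) or "within limits")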


Document information


More support for:

SAN Volume Controller
V5.1.x

Version:

5.1.x

Operating system(s):

Platform Independent

Software edition:

N/A

Reference #:

S1003555

Modified date:

2010-09-23
