
IBM System Storage SAN Volume Controller 6.1.0 Configuration Limits and Restrictions

Preventive Service Planning


This document lists the configuration limits and restrictions specific to SAN Volume Controller software version 6.1.0.


SAN Volume Controller 6.1.0 does not currently support use of solid-state drives (SSDs) in 2145-CF8 nodes.

DS4000 Maintenance

SAN Volume Controller supports concurrent ESM firmware upgrades for those DS4000 models listed as such on the Supported Hardware List, provided they are running the required controller firmware level or later. Controllers running earlier firmware levels are not supported for concurrent ESM upgrades. Customers in this situation who wish to gain support for concurrent ESM upgrades must first upgrade the DS4000 controller firmware to the required level. This is a controller firmware upgrade, not an ESM upgrade, and concurrent controller firmware upgrades are already supported in conjunction with SAN Volume Controller. Once the controller firmware is at the required level or later, the ESM firmware can be upgraded concurrently.

Note: The ESM firmware upgrade must be done on one disk expansion enclosure at a time. A 10-minute delay is required between completing the upgrade of one enclosure and starting the upgrade of the next. Use the Storage Manager application's "Recovery Guru" to confirm that the DS4000 status is optimal before upgrading the next enclosure. If it is not, do not continue ESM firmware upgrades until the problem is resolved.
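The pacing rule above can be sketched as a small helper that upgrades one enclosure at a time, waits ten minutes between enclosures, and refuses to continue unless the subsystem is optimal. The `upgrade_fn` and `status_fn` callables are hypothetical stand-ins for whatever tooling actually performs the upgrade and reports the Recovery Guru status; this is a sketch of the sequencing only, not an IBM-provided script.

```python
import time

def upgrade_enclosures(enclosures, upgrade_fn, status_fn,
                       delay_seconds=600, sleep_fn=time.sleep):
    """Upgrade ESM firmware one disk expansion enclosure at a time.

    upgrade_fn(enclosure) -- hypothetical callable performing the upgrade
    status_fn()           -- hypothetical callable returning True when the
                             DS4000 status is optimal (per Recovery Guru)
    """
    upgraded = []
    for i, enclosure in enumerate(enclosures):
        if i > 0:
            sleep_fn(delay_seconds)  # required 10-minute gap between enclosures
        if not status_fn():
            # Do not continue ESM upgrades until the problem is resolved.
            raise RuntimeError("DS4000 not optimal; stopping ESM upgrades")
        upgrade_fn(enclosure)
        upgraded.append(enclosure)
    return upgraded
```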

Host Limitations

Windows NTP server

The Linux NTP client used by SAN Volume Controller may not always function correctly with the Windows W32Time NTP server.

Windows SAN Boot Clusters (MSCS):

It is possible to SAN Boot a Microsoft Cluster subject to the following restrictions imposed by Microsoft:
  • Windows 2000 Server clusters require that the boot disk be on a different storage bus from the cluster server disks.
  • On Windows 2003, clustered disks and boot disks can be presented on the same storage bus, but ONLY if the Storport driver is being used.

These restrictions and more are described in the Microsoft White Paper: "Microsoft Windows Clustering: Storage Area Networks".

We have not tested, and therefore do not support, modifying the registry key as suggested on page 28 (which would allow boot disks and clustered disks on the same storage bus on Windows 2003 without the Storport driver).


Oracle restrictions that apply, by Oracle version and OS:
  • Oracle RAC 10g on Windows: 1
  • Oracle RAC 10g on AIX: 1, 2
  • Oracle RAC 11g on AIX: 2
  • Oracle RAC 10g on HP-UX 11.31: 1, 2
  • Oracle RAC 11g on HP-UX 11.31: 1, 2
  • Oracle RAC 10g on HP-UX 11.23: 1, 2
  • Oracle RAC 11g on HP-UX 11.23: 1, 2
  • Oracle RAC 10g on Linux Host: 1, 3

Restriction 1: ASM cannot recognise a change in the size of an SVC disk when it is resized unless the disk is removed from ASM and then added back again.

Restriction 2: After an ASM disk group has successfully dropped a disk, the disk cannot be deleted from the OS. The workaround to the OS restriction is to bring down the ASM instance, delete the disk from the OS, and bring up the ASM instance again.

Restriction 3: On RHEL4, set the Oracle Clusterware 'misscount' parameter to a larger value so that SDD has time to complete path failover first. The default misscount setting of 60 seconds is too short for SDD; we recommend setting it to 90 or 120 seconds. Command to use: crsctl set css misscount 90

Maximum Configurations

Configuration limits for SAN Volume Controller 6.1.0:

Cluster Properties
Nodes per cluster
8
Arranged as four I/O groups
Nodes per fabric
Maximum number of nodes that can be present on the same Fibre Channel fabric, with visibility of each other
I/O groups per cluster
4
Each containing two nodes
Fabrics per cluster
The number of counterpart Fibre Channel SANs which are supported
Inter-cluster partnerships per cluster
A cluster may be partnered with up to three remote clusters. No more than four clusters may be in the same connected set
Node Properties
Logins per node Fibre Channel port
Includes logins from server HBAs, disk controller ports, node ports within the same cluster and node ports from remote clusters
iSCSI sessions per node
512 in IP failover mode (when partner node is unavailable)
Managed Disk Properties
Managed disks (MDisks) per cluster
The maximum number of logical units which can be managed by a cluster.

This number includes external MDisks which have not been configured into storage pools (managed disk groups)
Managed disks per storage pool (managed disk group)
Storage pools (managed disk groups) per cluster
Managed disk extent size
8192 MB
Capacity for an individual internal managed disk (array)
Internal drives in 2145-CF8 nodes are currently not supported with SVC 6.1.0
Capacity for an individual external managed disk
256 TB
Note: External managed disks larger than 2 TB are only supported for certain types of storage systems. Refer to the supported hardware matrix for further details.
Total storage capacity manageable per cluster
32 PB
Requires maximum extent size of 8192 MB to be used.

This limit represents the per-cluster maximum of 2^22 extents.
Volume (Virtual Disk) Properties
Volumes (VDisks) per cluster
Maximum requires an 8-node cluster; refer to the volumes per I/O group limit below
Volumes per I/O group
Volumes per storage pool (managed disk group)
No limit is imposed beyond the volumes per-cluster limit
Fully-allocated volume capacity
256 TB
Maximum size for an individual fully-allocated volume
Thin-provisioned (space-efficient) volume capacity
256 TB
Maximum size for an individual thin-provisioned volume
Host mappings per cluster
See also - volume mappings per host object below.
Mirrored Volume (Virtual Disk) Properties
Copies per Volume
Volume copies per cluster
Not all volumes in a cluster can have the maximum number of copies at the same time
Total mirrored volume capacity per I/O group
1024 TB
Generic Host Properties
Host objects (IDs) per cluster
A host object may contain both Fibre Channel ports and iSCSI names
Host objects (IDs) per I/O group
Refer to the additional Fibre Channel and iSCSI host limits below
Volume mappings per host object
Total Fibre Channel ports and iSCSI names per cluster
Total Fibre Channel ports and iSCSI names per I/O group
Total Fibre Channel ports and iSCSI names per host object
Fibre Channel Host Properties
Fibre Channel hosts per cluster
1024 - Cisco, Brocade and McDATA fabrics

256 - QLogic fabrics
See also - Fibre Channel hosts per I/O group below
Fibre Channel host ports per cluster
2048 - Cisco, McDATA and Brocade fabrics

512 - QLogic fabrics
Fibre Channel hosts per I/O group
256 - Cisco, McDATA and Brocade fabrics

64 - QLogic fabrics
Fibre Channel host ports per I/O group
512 - Cisco, McDATA and Brocade fabrics

128 - QLogic fabrics
Fibre Channel host ports per host object (ID)
iSCSI Host Properties
iSCSI hosts per cluster
See also - iSCSI hosts per I/O group below
iSCSI hosts per I/O group
iSCSI names per host object
iSCSI names per I/O group
Copy Services Properties
Remote Copy (Metro Mirror and Global Mirror) relationships per cluster
This can be any mix of Metro Mirror and Global Mirror relationships.
Maximum requires an 8-node cluster (volumes per I/O group limit applies)
Remote Copy relationships per consistency group
No limit is imposed beyond the Remote Copy relationships per cluster limit
Remote Copy consistency groups per cluster
Total Metro Mirror and Global Mirror volume capacity per I/O group
1024 TB
This limit is the total capacity for all master and auxiliary volumes in the I/O group.
FlashCopy mappings per cluster
FlashCopy targets per source
Cascaded Incremental FlashCopy maps
A volume can be the source of up to 4 incremental FlashCopy maps. If this number of maps is exceeded then the FlashCopy behaviour for that cascade becomes non-incremental.
FlashCopy mappings per consistency group
FlashCopy consistency groups per cluster
FlashCopy volume capacity
2 TB
Maximum size for a FlashCopy source or target volume - this restriction is lifted in V6.1.0.9 and higher.
Total FlashCopy volume capacity per I/O group
1024 TB
External Storage System Properties
Storage system WWNNs per cluster
Storage system WWPNs per cluster
WWNNs per storage system
LUNs (managed disks) per storage system
No limit is imposed beyond the managed disks per cluster limit
Cluster and User Management Properties
User accounts per cluster
Includes the default user accounts
User groups per cluster
Includes the default user groups
Authentication servers per cluster
NTP servers per cluster
iSNS servers per cluster
Concurrent open SSH sessions per cluster
Event Notification Properties
SNMP servers per cluster
Syslog servers per cluster
Email (SMTP) servers per cluster
Email servers are used in turn until the email is successfully sent
Email users (recipients) per cluster
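The 32 PB total-capacity limit above follows directly from the extent count: 2^22 extents of 8192 MB each. A quick check of the arithmetic, using binary units (1 MB = 2^20 bytes, 1 PB = 2^50 bytes):

```python
extents = 2 ** 22            # per-cluster maximum number of extents
extent_size_mb = 8192        # maximum extent size in MB
total_bytes = extents * extent_size_mb * 2 ** 20   # MB -> bytes
total_pb = total_bytes / 2 ** 50                   # bytes -> PB
print(total_pb)   # 32.0
```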

Document information

More support for: SAN Volume Controller

Version: 6.1.0

Operating system(s): Platform Independent

Reference #: S1003704

Modified date: 23 June 2011
