IBM System Storage SAN Volume Controller 6.1.0 Configuration Limits and Restrictions

Preventive Service Planning


Abstract

This document lists the configuration limits and restrictions specific to SAN Volume Controller software version 6.1.0.

Content

Restrictions
SAN Volume Controller 6.1.0 does not currently support use of solid-state drives (SSDs) in 2145-CF8 nodes.


DS4000 Maintenance

SAN Volume Controller supports concurrent ESM firmware upgrades for the DS4000 models listed as such on the Supported Hardware List when they are running controller firmware 06.23.05.00 or later. Controllers running firmware levels earlier than 06.23.05.00 are not supported for concurrent ESM upgrades. Customers in this situation who wish to gain support for concurrent ESM upgrades must first upgrade the DS4000 controller firmware to 06.23.05.00. This is a controller firmware upgrade, not an ESM upgrade, and concurrent controller firmware upgrades are already supported in conjunction with SAN Volume Controller. Once the controller firmware is at 06.23.05.00 or later, the ESM firmware can be upgraded concurrently.

Note: The ESM firmware upgrade must be done on one disk expansion enclosure at a time. A 10-minute delay is required between the completion of one enclosure upgrade and the start of the next. Before upgrading the next enclosure, confirm via the Storage Manager application's Recovery Guru that the DS4000 is in an optimal state. If it is not, do not continue ESM firmware upgrades until the problem is resolved.
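
As an illustrative check only, if the DS4000 Storage Manager command-line interface (SMcli) is installed on a management station, the subsystem health can also be queried between enclosure upgrades. The subsystem name below is a placeholder and the exact syntax can vary by Storage Manager version:

   SMcli -n MY_DS4000 -c "show storageSubsystem healthStatus;"

Proceed to the next enclosure only if the reported status is optimal; otherwise resolve the reported problem first, as described above.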


Host Limitations

Windows NTP server

The Linux NTP client used by SAN Volume Controller may not always function correctly with the Windows W32Time NTP server.
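
As a hedged check only, the behaviour of the Windows time server can be verified from any Linux host with the standard NTP tools installed by querying it without setting the clock; the server address below is a placeholder:

   ntpdate -q ntp-server.example.com

If the query fails or returns unexpected results, a dedicated NTP server may be a more reliable time source for the cluster than the W32Time service.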

Windows SAN Boot Clusters (MSCS):

It is possible to SAN Boot a Microsoft Cluster subject to the following restrictions imposed by Microsoft:
  • Windows 2000 Server clusters require that the boot disk be on a different storage bus to the cluster server disks.
  • On Windows 2003, clustered disks and boot disks can be presented on the same storage bus, but ONLY if the Storport driver is being used.

These restrictions and more are described in the Microsoft White Paper: "Microsoft Windows Clustering: Storage Area Networks".

We have not tested, and therefore do not support, modifying the registry key as suggested on page 28 (which would allow boot disks and clustered disks on the same storage bus on Windows 2003 without the Storport driver).


Oracle

Oracle Version and OS | Restrictions that apply
Oracle RAC 10g on Windows | 1
Oracle RAC 10g on AIX | 1, 2
Oracle RAC 11g on AIX | 2
Oracle RAC 10g on HP-UX 11.31 | 1, 2
Oracle RAC 11g on HP-UX 11.31 | 1, 2
Oracle RAC 10g on HP-UX 11.23 | 1, 2
Oracle RAC 11g on HP-UX 11.23 | 1, 2
Oracle RAC 10g on Linux host | 1, 3

Restriction 1: ASM does not recognise the size change when an SVC disk is resized; the disk must be removed from ASM and added again before the new size is seen.
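
As an illustrative sketch only of this remove-and-re-add workaround, run in SQL*Plus against the ASM instance (connect as SYSDBA on 10g or SYSASM on 11g); the disk group name DATA, disk name DATA_0003 and device path below are hypothetical:

   SQL> ALTER DISKGROUP DATA DROP DISK DATA_0003;
   SQL> -- wait until the rebalance completes (V$ASM_OPERATION returns no rows)
   SQL> ALTER DISKGROUP DATA ADD DISK '/dev/rhdisk10' NAME DATA_0003;

The new size is picked up when the resized disk is added back into the disk group.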

Restriction 2: After an ASM disk group has successfully dropped a disk, the disk cannot be deleted from the OS while the ASM instance is running. The workaround to this OS restriction is to bring down the ASM instance, delete the disk from the OS, and bring the ASM instance back up.
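
As a hedged sketch only of that sequence on an AIX host, with hypothetical node and hdisk names (the srvctl syntax shown is the pre-11.2 form, and the device removal step differs on HP-UX):

   srvctl stop asm -n node1
   rmdev -dl hdisk10      # remove the dropped disk from the AIX device configuration
   srvctl start asm -n node1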

Restriction 3: For RHEL4, set the Oracle Clusterware 'misscount' parameter to a larger value so that SDD can complete path failover first. The default misscount setting of 60 seconds is too short for SDD; we recommend setting it to 90 or 120 seconds. Command to use: crsctl set css misscount 90
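
As an illustrative sequence only (run as root on one node; the behaviour of the set command can vary by Clusterware release):

   crsctl get css misscount     # display the current value
   crsctl set css misscount 90  # raise the timeout to 90 seconds, as recommended above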



Maximum Configurations

Configuration limits for SAN Volume Controller 6.1.0:

Property | Maximum Number | Comments

Cluster Properties
Nodes per cluster | 8 | Arranged as four I/O groups
Nodes per fabric | 64 | Maximum number of nodes that can be present on the same Fibre Channel fabric, with visibility of each other
I/O groups per cluster | 4 | Each containing two nodes
Fabrics per cluster | 4 | The number of counterpart Fibre Channel SANs which are supported
Inter-cluster partnerships per cluster | 3 | A cluster may be partnered with up to three remote clusters. No more than four clusters may be in the same connected set

Node Properties
Logins per node Fibre Channel port | 512 | Includes logins from server HBAs, disk controller ports, node ports within the same cluster and node ports from remote clusters
iSCSI sessions per node | 256 | 512 in IP failover mode (when the partner node is unavailable)

Managed Disk Properties
Managed disks (MDisks) per cluster | 4096 | The maximum number of logical units which can be managed by a cluster. This number includes external MDisks which have not been configured into storage pools (managed disk groups)
Managed disks per storage pool (managed disk group) | 128
Storage pools (managed disk groups) per cluster | 128
Managed disk extent size | 8192 MB
Capacity for an individual internal managed disk (array) | n/a | Internal drives in 2145-CF8 nodes are currently not supported with SVC 6.1.0
Capacity for an individual external managed disk | 256 TB | External managed disks larger than 2 TB are only supported for certain types of storage systems. Refer to the supported hardware matrix for further details.
Total storage capacity manageable per cluster | 32 PB | Requires the maximum extent size of 8192 MB to be used. This limit represents the per-cluster maximum of 2^22 extents (2^22 extents x 8192 MB per extent = 32 PB)

Volume (Virtual Disk) Properties
Volumes (VDisks) per cluster | 8192 | Maximum requires an 8-node cluster; refer to the volumes per I/O group limit below
Volumes per I/O group | 2048
Volumes per storage pool (managed disk group) | - | No limit is imposed beyond the volumes per cluster limit
Fully-allocated volume capacity | 256 TB | Maximum size for an individual fully-allocated volume
Thin-provisioned (space-efficient) volume capacity | 256 TB | Maximum size for an individual thin-provisioned volume
Host mappings per cluster | 20,000 | See also the volume mappings per host object limit below

Mirrored Volume (Virtual Disk) Properties
Copies per volume | 2
Volume copies per cluster | 8192 | The maximum number of volumes cannot all have the maximum number of copies
Total mirrored volume capacity per I/O group | 1024 TB

Generic Host Properties
Host objects (IDs) per cluster | 1024 | A host object may contain both Fibre Channel ports and iSCSI names
Host objects (IDs) per I/O group | 256 | Refer to the additional Fibre Channel and iSCSI host limits below
Volume mappings per host object | 512
Total Fibre Channel ports and iSCSI names per cluster | 2048
Total Fibre Channel ports and iSCSI names per I/O group | 512
Total Fibre Channel ports and iSCSI names per host object | 512

Fibre Channel Host Properties
Fibre Channel hosts per cluster | 1024 (Cisco, Brocade and McDATA fabrics); 256 (QLogic fabrics) | See also the Fibre Channel hosts per I/O group limit below
Fibre Channel host ports per cluster | 2048 (Cisco, McDATA and Brocade fabrics); 512 (QLogic fabrics)
Fibre Channel hosts per I/O group | 256 (Cisco, McDATA and Brocade fabrics); 64 (QLogic fabrics)
Fibre Channel host ports per I/O group | 512 (Cisco, McDATA and Brocade fabrics); 128 (QLogic fabrics)
Fibre Channel host ports per host object (ID) | 512

iSCSI Host Properties
iSCSI hosts per cluster | 1024 | See also the iSCSI hosts per I/O group limit below
iSCSI hosts per I/O group | 256
iSCSI names per host object | 256
iSCSI names per I/O group | 256

Copy Services Properties
Remote Copy (Metro Mirror and Global Mirror) relationships per cluster | 8192 | This can be any mix of Metro Mirror and Global Mirror relationships. Maximum requires an 8-node cluster (the volumes per I/O group limit applies)
Remote Copy relationships per consistency group | - | No limit is imposed beyond the Remote Copy relationships per cluster limit
Remote Copy consistency groups per cluster | 256
Total Metro Mirror and Global Mirror volume capacity per I/O group | 1024 TB | This limit is the total capacity for all master and auxiliary volumes in the I/O group
FlashCopy mappings per cluster | 4096
FlashCopy targets per source | 256
Cascaded Incremental FlashCopy maps | 4 | A volume can be the source of up to 4 incremental FlashCopy maps. If this number of maps is exceeded, the FlashCopy behaviour for that cascade becomes non-incremental
FlashCopy mappings per consistency group | 512
FlashCopy consistency groups per cluster | 127
FlashCopy volume capacity | 2 TB | Maximum size for a FlashCopy source or target volume; this restriction is lifted in V6.1.0.9 and higher
Total FlashCopy volume capacity per I/O group | 1024 TB

External Storage System Properties
Storage system WWNNs per cluster | 1024
Storage system WWPNs per cluster | 1024
WWNNs per storage system | 16
WWPNs per WWNN | 16
LUNs (managed disks) per storage system | - | No limit is imposed beyond the managed disks per cluster limit

Cluster and User Management Properties
User accounts per cluster | 400 | Includes the default user accounts
User groups per cluster | 256 | Includes the default user groups
Authentication servers per cluster | 1
NTP servers per cluster | 1
iSNS servers per cluster | 1
Concurrent open SSH sessions per cluster | 10

Event Notification Properties
SNMP servers per cluster | 6
Syslog servers per cluster | 6
Email (SMTP) servers per cluster | 6 | Email servers are used in turn until the email is successfully sent
Email users (recipients) per cluster | 12


Document information

More support for: SAN Volume Controller 6.1
Version: 6.1.0
Operating system(s): Platform Independent
Reference #: S1003704
Modified date: 2011-06-23
