IBM Storwize V7000 6.1.0 Configuration Limits and Restrictions

Preventive Service Planning


Abstract

This document lists the configuration limits and restrictions specific to IBM Storwize V7000 software version 6.1.0.

Content

Restrictions
Storwize V7000 software versions 6.1.0.0 to 6.1.0.6 support attachment of up to 4 expansion enclosures per system. Software version 6.1.0.7 and later removes this restriction, supporting attachment of up to 9 expansion enclosures, allowing a total of 10 enclosures per system.



DS4000 Maintenance

Storwize V7000 supports concurrent ESM firmware upgrades for the DS4000 models listed as such on the Supported Hardware List, provided they are running controller firmware level 06.23.05.00 or later. Controllers running firmware levels earlier than 06.23.05.00 are not supported for concurrent ESM upgrades. Customers in this situation who wish to gain support for concurrent ESM upgrades must first upgrade the DS4000 controller firmware to level 06.23.05.00. This action is a controller firmware upgrade, not an ESM upgrade, and concurrent controller firmware upgrades are already supported in conjunction with Storwize V7000. Once the controller firmware is at 06.23.05.00 or later, the ESM firmware can be upgraded concurrently.

Note: The ESM firmware upgrade must be done on one disk expansion enclosure at a time. Wait at least 10 minutes after one enclosure has been upgraded before starting the upgrade of the next enclosure. Before upgrading the next enclosure, confirm via the Storage Manager application's Recovery Guru that the DS4000 status is optimal. If it is not, do not continue ESM firmware upgrades until the problem is resolved.
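
For illustration only, the following sketch expresses that sequence in Python. The helper functions upgrade_esm_firmware() and ds4000_is_optimal() are hypothetical placeholders for whatever tooling or manual Storage Manager steps are actually used to perform the upgrade and to read the Recovery Guru status; only the one-at-a-time ordering, the 10 minute delay and the status check reflect the requirement above.

    import time

    EXPANSION_ENCLOSURES = ["enclosure_1", "enclosure_2", "enclosure_3"]  # example IDs

    def upgrade_esm_firmware(enclosure):
        """Hypothetical hook: start and complete the ESM upgrade for one enclosure."""
        raise NotImplementedError

    def ds4000_is_optimal():
        """Hypothetical hook: True when Recovery Guru reports the DS4000 is optimal."""
        raise NotImplementedError

    for enclosure in EXPANSION_ENCLOSURES:
        upgrade_esm_firmware(enclosure)
        time.sleep(10 * 60)  # wait at least 10 minutes before touching the next enclosure
        if not ds4000_is_optimal():
            # Do not continue ESM upgrades until the problem is resolved.
            raise RuntimeError("DS4000 is not in an optimal state; stop and investigate")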


Host Limitations

Windows NTP server

The Linux NTP client used by Storwize V7000 may not always function correctly with the Windows W32Time NTP server.
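
If time synchronisation problems are suspected, one simple check (not from the original document, and assuming a Linux host with the third-party Python package ntplib installed) is to query the Windows server directly and confirm that it answers standard NTP requests before configuring it as the cluster's time source:

    import ntplib

    NTP_SERVER = "w32time.example.com"  # hypothetical address of the Windows W32Time server

    client = ntplib.NTPClient()
    response = client.request(NTP_SERVER, version=3)

    print("stratum:", response.stratum)  # the server's reported NTP stratum
    print("offset:", response.offset)    # clock offset in seconds relative to this host
    print("delay:", response.delay)      # round-trip delay in seconds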

Windows SAN Boot Clusters (MSCS):

It is possible to SAN boot a Microsoft Cluster, subject to the following restrictions imposed by Microsoft:
  • On Windows 2003, clustered disks and boot disks can be presented on the same storage bus, but ONLY if the Storport driver is being used.

These restrictions and more are described in the Microsoft White Paper: "Microsoft Windows Clustering: Storage Area Networks".

We have not tested, and therefore do not support, modifying the registry key as suggested on page 28 (which would allow boot disks and clustered disks on the same storage bus on Windows 2003 without the Storport driver).


Oracle

The following restrictions apply, by Oracle version and operating system:
  • Oracle RAC 10g on Windows: restriction 1
  • Oracle RAC 10g on AIX: restrictions 1, 2
  • Oracle RAC 11g on AIX: restriction 2
  • Oracle RAC 10g on HP-UX 11.31: restrictions 1, 2
  • Oracle RAC 11g on HP-UX 11.31: restrictions 1, 2
  • Oracle RAC 10g on HP-UX 11.23: restrictions 1, 2
  • Oracle RAC 11g on HP-UX 11.23: restrictions 1, 2
  • Oracle RAC 10g on Linux host: restrictions 1, 3

Restriction 1: ASM cannot recognise a change in disk size when a Storwize V7000 volume is resized unless the disk is removed from ASM and included again.

Restriction 2: After an ASM disk group has successfully dropped a disk, the disk cannot be deleted from the OS. The workaround to the OS restriction is to bring down the ASM instance, delete the disk from the OS, and bring up the ASM instance again.

Restriction 3: For RHEL4, set the Oracle Clusterware 'misscount' parameter to a higher value to allow SDD to complete path failover first. The default misscount setting of 60 seconds is too short for SDD; we recommend setting it to 90 or 120 seconds. Command to use: crsctl set css misscount 90
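
Purely as a convenience, the sketch below wraps the misscount change from Restriction 3 in a short Python script. It assumes it is run on a RAC node where the crsctl utility is on the PATH and the user has Oracle Clusterware administrator rights; crsctl get css misscount is used only to display the value before and after the change.

    import subprocess

    def run(cmd):
        """Run a command, raise on failure, and return its output."""
        return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout.strip()

    # Show the current CSS misscount value.
    print("misscount before:", run(["crsctl", "get", "css", "misscount"]))

    # Raise misscount so SDD has time to complete path failover before
    # Clusterware acts; 90 seconds is the minimum recommended above.
    run(["crsctl", "set", "css", "misscount", "90"])

    # Confirm the new value took effect.
    print("misscount after:", run(["crsctl", "get", "css", "misscount"]))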



Maximum Configurations

Configuration limits for Storwize V7000 software version 6.1.0:

Each limit below is listed as the property and its maximum number, followed by any comments.

Cluster Properties
  • Nodes per cluster (system): 2. A cluster is a Storwize V7000 system that consists of two nodes, which are the node canisters inside a control enclosure.
  • Nodes per fabric: 64. Maximum number of nodes that can be present on the same Fibre Channel fabric, with visibility of each other.
  • I/O groups per cluster: 1
  • Fabrics per cluster: 4. The number of counterpart Fibre Channel SANs which are supported.
  • Inter-cluster partnerships per cluster: 3. A cluster may be partnered with up to three remote clusters. No more than four clusters may be in the same connected set.

Node Properties
  • Logins per node Fibre Channel port: 512. Includes logins from server HBAs, disk controller ports, node ports within the same cluster and node ports from remote clusters.
  • iSCSI sessions per node: 256 (512 in IP failover mode, when the partner node is unavailable).

Managed Disk Properties
  • Managed disks (MDisks) per cluster: 4096. The maximum number of logical units which can be managed by a cluster, including internal arrays. This number also includes external MDisks which have not been configured into storage pools (managed disk groups).
  • Managed disks per storage pool (managed disk group): 128
  • Storage pools (managed disk groups) per cluster: 128
  • Managed disk extent size: 8192 MB
  • Capacity for an individual internal managed disk (array): no limit is imposed beyond the maximum number of drives per array limits.
  • Capacity for an individual external managed disk: 2 TB
  • Total storage capacity manageable per cluster: 8 PB. The current limit is imposed by the 2 TB external managed disk capacity limit (4096 managed disks × 2 TB = 8 PB). Future support for external managed disks larger than 2 TB will allow for up to 32 PB of total storage capacity manageable per cluster; that limit represents the per-cluster maximum of 2^22 extents (2^22 extents × 8192 MB per extent = 32 PB).

Volume (Virtual Disk) Properties
  • Volumes (VDisks) per cluster: 2048
  • Volumes per storage pool (managed disk group): no limit is imposed beyond the volumes per cluster limit.
  • Fully-allocated volume capacity: 256 TB. Maximum size for an individual fully-allocated volume.
  • Thin-provisioned (space-efficient) volume capacity: 256 TB. Maximum size for an individual thin-provisioned volume.
  • Host mappings per cluster: 20,000. See also volume mappings per host object below.

Mirrored Volume (Virtual Disk) Properties
  • Copies per volume: 2
  • Volume copies per cluster: 4096
  • Total mirrored volume capacity per I/O group: 1024 TB

Generic Host Properties
  • Host objects (IDs) per cluster: 256. A host object may contain both Fibre Channel ports and iSCSI names.
  • Volume mappings per host object: 512
  • Total Fibre Channel ports and iSCSI names per cluster: 512
  • Total Fibre Channel ports and iSCSI names per host object: 512

Fibre Channel Host Properties
  • Fibre Channel hosts per cluster: 256
  • Fibre Channel host ports per cluster: 512
  • Fibre Channel host ports per host object (ID): 512

iSCSI Host Properties
  • iSCSI hosts per cluster: 256
  • iSCSI names per host object: 256

Copy Services Properties
  • Remote Copy (Metro Mirror and Global Mirror) relationships per cluster: 2048. This can be any mix of Metro Mirror and Global Mirror relationships.
  • Remote Copy relationships per consistency group: no limit is imposed beyond the Remote Copy relationships per cluster limit.
  • Remote Copy consistency groups per cluster: 256
  • Total Metro Mirror and Global Mirror volume capacity per I/O group: 1024 TB. This limit is the total capacity for all master and auxiliary volumes in the I/O group.
  • FlashCopy mappings per cluster: 4096
  • FlashCopy targets per source: 256
  • Cascaded Incremental FlashCopy maps: 4. A volume can be the source of up to 4 incremental FlashCopy maps. If this number of maps is exceeded then the FlashCopy behaviour for that cascade becomes non-incremental.
  • FlashCopy mappings per consistency group: 512
  • FlashCopy consistency groups per cluster: 127
  • FlashCopy volume capacity: 2 TB. Maximum size for a FlashCopy source or target volume; this restriction is lifted in V6.1.0.9 and higher.
  • Total FlashCopy volume capacity per I/O group: 1024 TB

Internal Storage Properties
  • SAS chains per control enclosure: 2
  • Enclosures per SAS chain: 5 (see notes). Software version 6.1.0.7 and later supports attachment of up to 5 expansion enclosures on SAS port 1 and up to 4 expansion enclosures on SAS port 2. Previous versions of the V7000 software are limited to 2 expansion enclosures per SAS chain.
  • Expansion enclosures per system: 9 (see notes). Software version 6.1.0.7 and later supports attachment of up to 9 expansion enclosures to 1 control enclosure, allowing a total of 10 enclosures per system. Previous versions of the V7000 software are limited to 4 expansion enclosures, allowing a total of 5 enclosures per system.
  • Min-Max drives per enclosure: 0-12 or 0-24. Limit depends on the enclosure model.
  • RAID arrays per cluster: 128
  • Min-Max member drives per RAID-0 array: 1-8
  • Min-Max member drives per RAID-1 array: 2-2
  • Min-Max member drives per RAID-5 array: 3-16
  • Min-Max member drives per RAID-6 array: 5-16
  • Min-Max member drives per RAID-10 array: 2-16
  • Hot spare drives: no limit is imposed.

External Storage System Properties
  • Storage system WWNNs per cluster: 1024
  • Storage system WWPNs per cluster: 1024
  • WWNNs per storage system: 16
  • WWPNs per WWNN: 16
  • LUNs (managed disks) per storage system: no limit is imposed beyond the managed disks per cluster limit.

Cluster and User Management Properties
  • User accounts per cluster: 400. Includes the default user accounts.
  • User groups per cluster: 256. Includes the default user groups.
  • Authentication servers per cluster: 1
  • NTP servers per cluster: 1
  • iSNS servers per cluster: 1
  • Concurrent open SSH sessions per cluster: 10

Event Notification Properties
  • SNMP servers per cluster: 6
  • Syslog servers per cluster: 6
  • Email (SMTP) servers per cluster: 6. Email servers are used in turn until the email is successfully sent.
  • Email users (recipients) per cluster: 12


Document information

More support for: IBM Storwize V7000 (2076); IBM Storwize V7000 (2076) 6.1
Version: 6.1.0
Operating system(s): Platform Independent
Reference #: S1003702
Modified date: 2012-02-08
