IBM Storwize V7000 6.1.0 Configuration Limits and Restrictions

Preventive Service Planning


This document lists the configuration limits and restrictions specific to IBM Storwize V7000 software version 6.1.0.


Earlier Storwize V7000 software versions support attachment of up to 4 expansion enclosures per system. Later software versions remove this restriction, supporting attachment of up to 9 expansion enclosures, allowing a total of 10 enclosures per system.

DS4000 Maintenance

Storwize V7000 supports concurrent ESM firmware upgrades for those DS4000 models listed as such on the Supported Hardware List when they are running the required controller firmware level or later. Controllers running earlier firmware levels are not supported for concurrent ESM upgrades. Customers in this situation who wish to gain support for concurrent ESM upgrades will need to first upgrade the DS4000 controller firmware to the required level. This action is a controller firmware upgrade, not an ESM upgrade, and concurrent controller firmware upgrades are already supported in conjunction with Storwize V7000. Once the controller firmware is at the required level or later, the ESM firmware can be upgraded concurrently.

Note: The ESM firmware upgrade must be done on one disk expansion enclosure at a time. A 10-minute delay is required between completing the upgrade of one enclosure and starting the upgrade of the next. Confirm via the Storage Manager application's "Recovery Guru" that the DS4000 is in an optimal state before upgrading the next enclosure. If it is not, do not continue ESM firmware upgrades until the problem is resolved.
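As a rough illustration only, the one-enclosure-at-a-time sequencing described above can be sketched in Python. The `upgrade_esm_firmware` and `ds4000_is_optimal` helpers are hypothetical placeholders for actions performed in the Storage Manager application, not real APIs:

```python
import time

UPGRADE_DELAY_SECONDS = 10 * 60  # required 10-minute gap between enclosures


def upgrade_esm_firmware(enclosure):
    """Hypothetical placeholder for the Storage Manager ESM upgrade step."""
    print(f"Upgrading ESM firmware on {enclosure}")


def ds4000_is_optimal():
    """Hypothetical placeholder for the Recovery Guru 'optimal state' check."""
    return True


def staged_esm_upgrade(enclosures, delay=UPGRADE_DELAY_SECONDS):
    """Upgrade enclosures one at a time, pausing between enclosures and
    stopping if the DS4000 is not in an optimal state."""
    upgraded = []
    for enclosure in enclosures:
        if not ds4000_is_optimal():
            # Do not continue until the problem is resolved.
            break
        upgrade_esm_firmware(enclosure)
        upgraded.append(enclosure)
        time.sleep(delay)  # wait before starting the next enclosure
    return upgraded
```

In practice the upgrade and the health check are done in the Storage Manager GUI; the sketch only captures the required ordering and delay.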

Host Limitations

Windows NTP server

The Linux NTP client used by Storwize V7000 may not always function correctly with the Windows W32Time NTP server.

Windows SAN Boot Clusters (MSCS):

It is possible to SAN Boot a Microsoft Cluster subject to the following restrictions imposed by Microsoft:
  • On Windows 2003, clustered disks and boot disks can be presented on the same storage bus, but ONLY if the Storport driver is being used.

These restrictions and more are described in the Microsoft White Paper: "Microsoft Windows Clustering: Storage Area Networks".

We have not tested, and therefore do not support, modifying the registry key as suggested on page 28 (which would allow boot disks and clustered disks on the same storage bus on Windows 2003 without the Storport driver).


Oracle version and OS, with the restrictions (defined below) that apply:
  • Oracle RAC 10g on Windows: 1
  • Oracle RAC 10g on AIX: 1, 2
  • Oracle RAC 11g on AIX: 2
  • Oracle RAC 10g on HP-UX 11.31: 1, 2
  • Oracle RAC 11g on HP-UX 11.31: 1, 2
  • Oracle RAC 10g on HP-UX 11.23: 1, 2
  • Oracle RAC 11g on HP-UX 11.23: 1, 2
  • Oracle RAC 10g on Linux Host: 1, 3

Restriction 1: ASM cannot recognise the size change of a disk when a Storwize V7000 disk is resized, unless the disk is removed from ASM and included again.

Restriction 2: After an ASM disk group has successfully dropped a disk, the disk cannot be deleted from the OS. The workaround for this OS restriction is to bring down the ASM instance, delete the disk from the OS, and bring up the ASM instance again.

Restriction 3: For RHEL4, set the Oracle Clusterware 'misscount' parameter to a larger value so that SDD can complete path failover first. The default misscount setting of 60 seconds is too short for SDD; we recommend setting it to 90 or 120 seconds. Command to use: crsctl set css misscount 90

Maximum Configurations

Configuration limits for Storwize V7000 software version 6.1.0:

Cluster Properties
  • Nodes per cluster (system): 2
    A cluster is a Storwize V7000 system that consists of two nodes, the node canisters inside a control enclosure.
  • Nodes per fabric
    Maximum number of nodes that can be present on the same Fibre Channel fabric, with visibility of each other.
  • I/O groups per cluster
  • Fabrics per cluster
    The number of counterpart Fibre Channel SANs which are supported.
  • Inter-cluster partnerships per cluster: 3
    A cluster may be partnered with up to three remote clusters. No more than four clusters may be in the same connected set.
Node Properties
  • Logins per node Fibre Channel port
    Includes logins from server HBAs, disk controller ports, node ports within the same cluster and node ports from remote clusters.
  • iSCSI sessions per node
    512 in IP failover mode (when partner node is unavailable).
Managed Disk Properties
  • Managed disks (MDisks) per cluster
    The maximum number of logical units which can be managed by a cluster, including internal arrays. This number also includes external MDisks which have not been configured into storage pools (managed disk groups).
  • Managed disks per storage pool (managed disk group)
  • Storage pools (managed disk groups) per cluster
  • Managed disk extent size: 8192 MB
  • Capacity for an individual internal managed disk (array)
    No limit is imposed beyond the maximum number of drives per array.
  • Capacity for an individual external managed disk: 2 TB
  • Total storage capacity manageable per cluster: 8 PB
    The current limit is imposed by the 2 TB external managed disk capacity limit. Future support for external managed disks larger than 2 TB will allow up to 32 PB of total storage capacity manageable per cluster; that limit represents the per-cluster maximum of 2^22 extents.
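The capacity figures above are internally consistent, as a quick arithmetic check shows (using power-of-two units; the 4096 figure below is derived arithmetic, not a documented limit):

```python
MB = 2 ** 20  # the limits above use power-of-two units
TB = 2 ** 40
PB = 2 ** 50

extent_size = 8192 * MB  # maximum managed disk extent size
max_extents = 2 ** 22    # per-cluster extent maximum

# 2^22 extents of 8192 MB each give exactly the 32 PB ceiling.
assert max_extents * extent_size == 32 * PB

# The current 8 PB limit divided by the 2 TB external MDisk limit
# shows how many fully sized external MDisks that corresponds to.
print((8 * PB) // (2 * TB))  # prints 4096
```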
Volume (Virtual Disk) Properties
  • Volumes (VDisks) per cluster
  • Volumes per storage pool (managed disk group)
    No limit is imposed beyond the volumes per cluster limit.
  • Fully-allocated volume capacity: 256 TB
    Maximum size for an individual fully-allocated volume.
  • Thin-provisioned (space-efficient) volume capacity: 256 TB
    Maximum size for an individual thin-provisioned volume.
  • Host mappings per cluster
    See also volume mappings per host object, below.
Mirrored Volume (Virtual Disk) Properties
  • Copies per volume
  • Volume copies per cluster
  • Total mirrored volume capacity per I/O group: 1024 TB
Generic Host Properties
  • Host objects (IDs) per cluster
    A host object may contain both Fibre Channel ports and iSCSI names.
  • Volume mappings per host object
  • Total Fibre Channel ports and iSCSI names per cluster
  • Total Fibre Channel ports and iSCSI names per host object
Fibre Channel Host Properties
  • Fibre Channel hosts per cluster
  • Fibre Channel host ports per cluster
  • Fibre Channel host ports per host object (ID)
iSCSI Host Properties
  • iSCSI hosts per cluster
  • iSCSI names per host object
Copy Services Properties
  • Remote Copy (Metro Mirror and Global Mirror) relationships per cluster
    This can be any mix of Metro Mirror and Global Mirror relationships.
  • Remote Copy relationships per consistency group
    No limit is imposed beyond the Remote Copy relationships per cluster limit.
  • Remote Copy consistency groups per cluster
  • Total Metro Mirror and Global Mirror volume capacity per I/O group: 1024 TB
    This limit is the total capacity for all master and auxiliary volumes in the I/O group.
  • FlashCopy mappings per cluster
  • FlashCopy targets per source
  • Cascaded Incremental FlashCopy maps: 4
    A volume can be the source of up to 4 incremental FlashCopy maps. If this number of maps is exceeded, the FlashCopy behaviour for that cascade becomes non-incremental.
  • FlashCopy mappings per consistency group
  • FlashCopy consistency groups per cluster
  • FlashCopy volume capacity: 2 TB
    Maximum size for a FlashCopy source or target volume; this restriction is lifted in V6.1.0.9 and higher.
  • Total FlashCopy volume capacity per I/O group: 1024 TB
Internal Storage Properties
  • SAS chains per control enclosure
  • Enclosures per SAS chain: see notes
    Later software versions support attachment of up to 5 expansion enclosures on SAS port 1 and up to 4 expansion enclosures on SAS port 2. Previous versions of the V7000 software are limited to 2 expansion enclosures per SAS chain.
  • Expansion enclosures per system: see notes
    Later software versions support attachment of up to 9 expansion enclosures to 1 control enclosure, allowing a total of 10 enclosures per system. Previous versions of the V7000 software are limited to 4 expansion enclosures, allowing a total of 5 enclosures per system.
  • Min-Max drives per enclosure
    Limit depends on the enclosure model.
  • RAID arrays per cluster
  • Min-Max member drives per RAID-0 array
  • Min-Max member drives per RAID-1 array
  • Min-Max member drives per RAID-5 array
  • Min-Max member drives per RAID-6 array
  • Min-Max member drives per RAID-10 array
  • Hot spare drives
    No limit is imposed.
External Storage System Properties
  • Storage system WWNNs per cluster
  • Storage system WWPNs per cluster
  • WWNNs per storage system
  • LUNs (managed disks) per storage system
    No limit is imposed beyond the managed disks per cluster limit.
Cluster and User Management Properties
  • User accounts per cluster
    Includes the default user accounts.
  • User groups per cluster
    Includes the default user groups.
  • Authentication servers per cluster
  • NTP servers per cluster
  • iSNS servers per cluster
  • Concurrent open SSH sessions per cluster
Event Notification Properties
  • SNMP servers per cluster
  • Syslog servers per cluster
  • Email (SMTP) servers per cluster
    Email servers are used in turn until the email is successfully sent.
  • Email users (recipients) per cluster


Document information

More support for:

IBM Storwize V7000 (2076)
IBM Storwize V7000 (2076) 6.1



Operating system(s):

Platform Independent

Reference #:


Modified date:

