V7.3 Configuration Limits and Restrictions for IBM Storwize V7000

Preventive Service Planning


Abstract

This document lists the configuration limits and restrictions specific to IBM Storwize V7000 software version 7.3.

Content

Clustered Systems
A Storwize V7000 system at version 7.3 and higher requires either native Fibre Channel SAN or Fibre Channel over Ethernet (FCoE) connectivity for communication between all nodes in the local cluster.

Where only Fibre Channel over Ethernet (FCoE) is used to create a clustered system, support is available only for a specific set of FCoE-capable switches. Refer to the notes column of the switch tables in the V7.3.x Supported Hardware List, Device Driver, Firmware and Recommended Software Levels for Storwize V7000, or the notes section of the supported switch types in SSIC, for details of the switch models supported for FCoE clustering.



FCoE-capable switches that are not listed as supported for FCoE clustering can optionally be used to create a separate zone that provides additional redundancy for intra-cluster communication; however, Fibre Channel SAN connectivity is still required.

Partnerships between systems for Metro Mirror or Global Mirror replication can use Fibre Channel, native Ethernet, or FCoE connectivity; however, direct FCoE links are supported only up to a maximum of 300 metres. Distances greater than 300 metres are supported only when using an FCIP link or Fibre Channel between the source and target.
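For reference, a Fibre Channel partnership of this kind is typically created with the mkfcpartnership CLI command, run on both systems. The following is a minimal sketch only; the remote system name and the bandwidth and copy-rate values are assumed examples, not values from this document:

  # Illustrative sketch: 'remote_sys', 2000 Mbps and 50% are assumed example values.
  # Run on the local system, and equivalently on the remote system:
  mkfcpartnership -linkbandwidthmbits 2000 -backgroundcopyrate 50 remote_sys

  # Verify the partnership state on each system:
  lspartnership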

Cisco Nexus
The minimum level of Cisco Nexus firmware supported for FCoE with the IBM Storwize V7000 Gen2 is 5.2(1)N1(2a).


IP Partnership

Using an Ethernet switch to convert a 10Gbps IP partnership link to a 1Gbps link, or vice versa, is not supported; the IP infrastructure at both partnership sites must therefore be either 1Gbps or 10Gbps throughout. However, bandwidth limiting of 10Gbps and 1Gbps IP partnerships between sites is supported.
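For reference, the bandwidth limit for an IP partnership is typically set when the partnership is created with the mkippartnership CLI command. A minimal sketch, in which the address and rate values are assumed examples:

  # Illustrative sketch: the address and rates are assumed example values.
  # Create an IP partnership limited to 1000 Mbps (1 Gbps) of link bandwidth:
  mkippartnership -type ipv4 -clusterip 192.0.2.10 -linkbandwidthmbits 1000 -backgroundcopyrate 50

  # Confirm the partnership and its configured bandwidth:
  lspartnership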

Fabric Limitations
Only one FCF (Fibre Channel Forwarder) switch per fabric is supported.

DS4000 Maintenance

Storwize V7000 supports concurrent ESM firmware upgrades for the DS4000 models listed as such on the Supported Hardware List when they are running controller firmware 06.23.05.00 or later. Controllers running firmware levels earlier than 06.23.05.00 are not supported for concurrent ESM upgrades; customers in this situation who wish to gain support for concurrent ESM upgrades must first upgrade the DS4000 controller firmware to 06.23.05.00. This action is a controller firmware upgrade, not an ESM upgrade, and concurrent controller firmware upgrades are already supported in conjunction with Storwize V7000. Once the controller firmware is at 06.23.05.00 or later, the ESM firmware can be upgraded concurrently.

Note: The DS4000 ESM firmware upgrade must be performed on one disk expansion enclosure at a time, with a 10-minute delay between the completion of one enclosure's upgrade and the start of the next. Confirm via the Storage Manager application's "Recovery Guru" that the DS4000 status is optimal before upgrading the next enclosure. If it is not, do not continue ESM firmware upgrades until the problem is resolved.
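The pacing described in this note can be enforced with a simple wrapper around the manual Storage Manager steps. A minimal bash sketch, in which the enclosure names are placeholders and the ESM upgrade and Recovery Guru check remain manual operations:

  #!/bin/bash
  # Placeholder enclosure names; the ESM upgrade itself and the Recovery Guru
  # status check are manual steps performed in DS4000 Storage Manager.
  for enclosure in enc01 enc02 enc03; do
      read -r -p "Upgrade ESM firmware on ${enclosure}, then press Enter." _
      echo "Waiting 10 minutes before the next enclosure..."
      sleep 600
      read -r -p "Does Recovery Guru report an optimal state? (y/n) " ok
      if [ "${ok}" != "y" ]; then
          echo "Do not continue ESM upgrades; resolve the problem first."
          exit 1
      fi
  done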


Host Limitations

Windows NTP server

The Linux NTP client used by Storwize V7000 may not always function correctly with the Windows W32Time NTP server.


Windows SAN Boot Clusters (MSCS):

It is possible to SAN boot a Microsoft Cluster subject to the following restrictions imposed by Microsoft:
  • On Windows 2003, clustered disks and boot disks can be presented on the same storage bus, but ONLY if the Storport driver is being used.

These restrictions and more are described in the Microsoft White Paper: "Microsoft Windows Clustering: Storage Area Networks".

IBM has not tested, and therefore does not support, modifying the registry key suggested on page 28 (which would allow boot disks and clustered disks on the same storage bus on Windows 2003 without the Storport driver).


Oracle


Oracle Version and OS | Restrictions that apply
Oracle RAC 10g on Linux hosts | Restriction 1

Restriction 1: On RHEL4, set the Oracle Clusterware 'misscount' parameter to a larger value so that SDD can complete path failover first. The default misscount setting of 60 seconds is too short for SDD; we recommend setting it to 90 or 120 seconds. Command to use: crsctl set css misscount 90
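A minimal sketch of checking and raising the value, assuming the Oracle Clusterware 10g crsctl syntax quoted above:

  # Check the current CSS misscount value (Oracle Clusterware 10g):
  crsctl get css misscount

  # Raise it to 90 seconds so that SDD can complete path failover first:
  crsctl set css misscount 90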



Maximum Configurations

Configuration limits for Storwize V7000:

Property | Maximum Number | Comments

System (Cluster) Properties

Control enclosures per system (cluster) | 4 | Each control enclosure contains two node canisters
Nodes per system | 8 | Arranged as four I/O groups
Nodes per fabric | 64 | Maximum number of SVC or Storwize family system nodes that can be present on the same Fibre Channel fabric, with visibility of each other
Fabrics per V7000 system | 6 | The number of counterpart Fibre Channel SANs which are supported: up to 4 fabrics using native Fibre Channel ports, and up to 2 fabrics using FCoE ports
Fabrics per V7000 Gen 2 system | 8 | The number of counterpart SANs which are supported
Inter-cluster partnerships per system | 3 | A system may be partnered with up to three remote systems. No more than four systems may be in the same connected set

Node Properties

Logins per node Fibre Channel port | 512 | Includes logins from server HBAs, disk controller ports, node ports within the same system and node ports from remote systems
iSCSI sessions per node | 256 | 512 in IP failover mode (when the partner node is unavailable)

Managed Disk Properties

Managed disks (MDisks) per system | 4096 | The maximum number of logical units which can be managed by a system, including internal arrays. This number also includes external MDisks which have not been configured into storage pools (managed disk groups)
Managed disks per storage pool (managed disk group) | 128 |
Storage pools per system | 128 |
Managed disk extent size | 8192 MB |
Capacity for an individual internal managed disk (array) | 1 PB | No limit is imposed beyond the maximum number of drives per array. Maximum size is dependent on the extent size of the storage pool; see the Extents comparison table below
Capacity for an individual external managed disk | 1 PB | External managed disks larger than 2 TB are only supported for certain types of storage systems; refer to the supported hardware matrix for further details. Maximum size is dependent on the extent size of the storage pool; see the Extents comparison table below
Total storage capacity manageable per system | 32 PB | Maximum requires an extent size of 8192 MB to be used. This limit represents the per-system maximum of 2^22 extents. See the Extents comparison table below

Volume (Virtual Disk) Properties

Volumes (VDisks) per system | 8192 | Maximum requires a system containing four control enclosures; refer to the volumes per I/O group limit below
Volumes per I/O group (volumes per caching I/O group) | 2048 |
Volumes accessible per I/O group | 8192 |
Thin-provisioned (space-efficient) volume copies per system | 8192 | No limit is imposed here beyond the volume copies per system limit
Compressed volume copies per system | 800 | Maximum requires a system containing four control enclosures; refer to the compressed volume copies per I/O group limit below
Compressed volume copies per I/O group | 200 |
Volumes per storage pool | - | No limit is imposed beyond the volumes per system limit
Fully-allocated volume capacity | 256 TB | Maximum size for an individual fully-allocated volume. Maximum size is dependent on the extent size of the storage pool; see the Extents comparison table below
Thin-provisioned (space-efficient) volume capacity | 256 TB | Maximum size for an individual thin-provisioned volume. Maximum size is dependent on the extent size of the storage pool; see the Extents comparison table below
Host mappings per system | 20,000 | See also the volume mappings per host object limit below

Mirrored Volume (Virtual Disk) Properties

Copies per volume | 2 |
Volume copies per system | 8192 |
Total mirrored volume capacity per I/O group | 1024 TB |

Generic Host Properties

Host objects (IDs) per system | 2048 | A host object may contain both Fibre Channel ports and iSCSI names
Host objects (IDs) per I/O group | 512 | Refer to the additional Fibre Channel and iSCSI host limits below
Volume mappings per host object | 2048* | * This limit is 512 for the Solaris operating system
Total Fibre Channel ports and iSCSI names per system | 8192 |
Total Fibre Channel ports and iSCSI names per I/O group | 2048 |
Total Fibre Channel ports and iSCSI names per host object | 512 |

Fibre Channel Host Properties (including hosts attached using FCoE)

Fibre Channel hosts per system | 2048 |
Fibre Channel host ports per system | 4096 |
Fibre Channel hosts per I/O group | 512 |
Fibre Channel host ports per I/O group | 1024 |
Fibre Channel host ports per host object (ID) | 512 |

iSCSI Host Properties

iSCSI hosts per system | 1024 |
iSCSI hosts per I/O group | 256 |
iSCSI names per host object (ID) | 256 |
iSCSI names per I/O group | 256 |

Copy Services Properties

Remote Copy (Metro Mirror and Global Mirror) relationships per system | 8192 | This can be any mix of Metro Mirror and Global Mirror relationships
Remote Copy relationships per consistency group | - | No limit is imposed beyond the Remote Copy relationships per system limit
Remote Copy consistency groups per system | 256 |
Total Metro Mirror and Global Mirror volume capacity per I/O group | 1024 TB | This limit is the total capacity for all master and auxiliary volumes in the I/O group
Total number of Global Mirror with Change Volumes relationships per system | 256 |
FlashCopy mappings per system | 4096 |
FlashCopy targets per source | 256 |
FlashCopy mappings per consistency group | 512 |
FlashCopy consistency groups per system | 127 or 255 | 127 for Storwize V7000 versions up to and including 7.3.0.4; 255 for version 7.3.0.5 and higher
Total FlashCopy volume capacity per I/O group | 1024 TB |

IP Partnership Properties

Inter-cluster IP partnerships per system | 1 | A system may be partnered with up to three remote systems; a maximum of one of those partnerships can be IP, with the other two FC
I/O groups per system | 2 | The nodes from a maximum of two I/O groups per system can be used for IP partnership
Inter-site links per IP partnership | 2 | A maximum of two inter-site links can be used between two IP partnership sites
Ports per node | 1 | A maximum of one port per node can be used for IP partnership

Internal Storage Properties

SAS chains per control enclosure | 2 |
Expansion enclosures per Storwize V7000 SAS chain | 5 | Up to 5 expansion enclosures on SAS port 1 and up to 4 expansion enclosures on SAS port 2
Expansion enclosures per Storwize V7000 control enclosure | 9 |
Drives per Storwize V7000 I/O group | 240 |
Drives per Storwize V7000 system | 960 | Maximum requires a system containing four control enclosures, each with the maximum number of expansion enclosures
Expansion enclosures per Storwize V7000 Gen 2 SAS chain | 10 |
Expansion enclosures per Storwize V7000 Gen 2 control enclosure | 20 |
Drives per Storwize V7000 Gen 2 I/O group | 504 |
Drives per Storwize V7000 Gen 2 system | 1056 |
Min-Max drives per enclosure | 0-12 or 0-24 | Limit depends on the enclosure model
RAID arrays per system | 128 |
Min-Max member drives per RAID-0 array | 1-8 |
Min-Max member drives per RAID-1 array | 2-2 |
Min-Max member drives per RAID-5 array | 3-16 |
Min-Max member drives per RAID-6 array | 5-16 |
Min-Max member drives per RAID-10 array | 2-16 |
Hot spare drives | - | No limit is imposed

External Storage System Properties

Storage system WWNNs per system (cluster) | 1024 |
Storage system WWPNs per system (cluster) | 1024 |
WWNNs per storage system | 16 |
WWPNs per WWNN | 16 |
LUNs (managed disks) per storage system | - | No limit is imposed beyond the managed disks per system limit

System and User Management Properties

User accounts per system | 400 | Includes the default user accounts
User groups per system | 256 | Includes the default user groups
Authentication servers per system | 1 |
NTP servers per system | 1 |
iSNS servers per system | 1 |
Concurrent open SSH sessions per system | 10 |

Event Notification Properties

SNMP servers per system | 6 |
Syslog servers per system | 6 |
Email (SMTP) servers per system | 6 | Email servers are used in turn until the email is successfully sent
Email users (recipients) per system | 12 |
LDAP servers per system | 6 |


Extents

The following table compares the maximum volume, MDisk and system capacity for each extent size.

Extent size (MB) | Maximum non thin-provisioned volume capacity in GB | Maximum thin-provisioned volume capacity in GB | Maximum MDisk capacity in GB | Total storage capacity manageable per system*
16 | 2048 (2 TB) | 2000 | 2048 (2 TB) | 64 TB
32 | 4096 (4 TB) | 4000 | 4096 (4 TB) | 128 TB
64 | 8192 (8 TB) | 8000 | 8192 (8 TB) | 256 TB
128 | 16,384 (16 TB) | 16,000 | 16,384 (16 TB) | 512 TB
256 | 32,768 (32 TB) | 32,000 | 32,768 (32 TB) | 1 PB
512 | 65,536 (64 TB) | 65,000 | 65,536 (64 TB) | 2 PB
1024 | 131,072 (128 TB) | 130,000 | 131,072 (128 TB) | 4 PB
2048 | 262,144 (256 TB) | 260,000 | 262,144 (256 TB) | 8 PB
4096 | 262,144 (256 TB) | 262,144 | 524,288 (512 TB) | 16 PB
8192 | 262,144 (256 TB) | 262,144 | 1,048,576 (1024 TB) | 32 PB

* The total capacity values assume that all of the storage pools in the system use the same extent size.
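As a worked example of how these values are derived: the table is consistent with a maximum of 131,072 (2^17) extents per volume or MDisk, a 256 TB cap on individual volume capacity, and the per-system maximum of 4,194,304 (2^22) extents noted above. With an 8192 MB extent size, 131,072 x 8192 MB = 1,048,576 GB (1 PB) per MDisk, and 4,194,304 x 8192 MB = 32 PB per system.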


Document information

Product: IBM Storwize V7000 (2076)
Version: 7.3
Operating system(s): IBM Storwize V7000
Reference #: S1004628
Modified date: 2014-09-05
