
V1.3 Configuration Limits and Restrictions for IBM Storwize V7000 Unified

Preventive Service Planning


Abstract

This document lists the configuration limits and restrictions specific to IBM Storwize V7000 Unified software version 1.3.

Content





Maximum Configurations

Configuration limits for Storwize V7000 Unified software version 1.3:

File Module Properties
  Maximum number of file modules: 2
  Maximum size of a single shared file system: 8 PB
  Maximum number of file systems within one system: 64
  Maximum size of a single file: 8 PB
  Maximum number of files per file system: 4 x 10^9
  Maximum number of snapshots per file system: 256
  Maximum number of snapshots per file set: 256
  Maximum number of exports that can be created per service: 1000
    (Supported services are CIFS, NFS, FTP, SCP and HTTPS)
  Maximum number of administrative user groups: 128
  Maximum number of administrative users per administrative user group: 30
  Maximum number of administrative user groups per administrative user: 30
  Maximum number of different authentication server integrations: 1
    (Supported authentication servers are AD, LDAP or Samba PDC)


Configuration limits for Storwize V7000 software version 6.3.0:

System (Cluster) Properties
  Control enclosures per system (cluster): 1
    (Each control enclosure contains two node canisters)
  Nodes per system: 2
    (Arranged as one I/O group)
  Nodes per fabric: 64
    (Maximum number of SVC and V7000 nodes that can be present on the same Fibre Channel fabric with visibility of each other)
  Fabrics per system: 2
    (The number of counterpart Fibre Channel SANs that are supported)
  Inter-cluster partnerships per system: 3
    (A system may be partnered with up to three remote systems. No more than four systems may be in the same connected set)

Node Properties
  Logins per node Fibre Channel port: 512
    (Includes logins from server HBAs, disk controller ports, node ports within the same system and node ports from remote systems)
  iSCSI sessions per node: 256
    (512 in IP failover mode, when the partner node is unavailable)

Managed Disk Properties
  Managed disks (MDisks) per system: 4096
    (The maximum number of logical units that can be managed by a system, including internal arrays. This number also includes external MDisks that have not been configured into storage pools (managed disk groups))
  Managed disks per storage pool (managed disk group): 128
  Storage pools per system: 128
  Managed disk extent size: 8192 MB
  Capacity for an individual internal managed disk (array): 1 PB
    (No limit is imposed beyond the maximum number of drives per array. Maximum size depends on the extent size of the storage pool; see the extents comparison table below)
  Capacity for an individual external managed disk: 1 PB
    (External managed disks larger than 2 TB are only supported for certain types of storage systems; refer to the supported hardware matrix for details. Maximum size depends on the extent size of the storage pool; see the extents comparison table below)
  Total storage capacity manageable per system: 32 PB
    (The maximum requires an extent size of 8192 MB; this limit represents the per-system maximum of 2^22 extents. See the extents comparison table below)

Volume (Virtual Disk) Properties
  Volumes (VDisks) per system: 2048
  Volumes per storage pool: no limit is imposed beyond the volumes per system limit
  Fully-allocated volume capacity: 256 TB
    (Maximum size for an individual fully-allocated volume. Maximum size depends on the extent size of the storage pool; see the extents comparison table below)
  Thin-provisioned (space-efficient) volume capacity: 256 TB
    (Maximum size for an individual thin-provisioned volume. Maximum size depends on the extent size of the storage pool; see the extents comparison table below)
  Host mappings per system: 20,000
    (See also volume mappings per host object, below)

Mirrored Volume (Virtual Disk) Properties
  Copies per volume: 2
  Volume copies per system: 4096
  Total mirrored volume capacity per I/O group: 1024 TB

Generic Host Properties
  Host objects (IDs) per system: 256
    (A host object may contain both Fibre Channel ports and iSCSI names)
  Volume mappings per host object: 512
  Total Fibre Channel ports and iSCSI names per system: 512
  Total Fibre Channel ports and iSCSI names per host object: 512

Fibre Channel Host Properties
  Fibre Channel hosts per system: 256
  Fibre Channel host ports per system: 512
  Fibre Channel host ports per host object (ID): 512

iSCSI Host Properties
  iSCSI hosts per system: 256
  iSCSI names per host object: 256
  iSCSI names per system: 256

Copy Services Properties
  Remote Copy (Metro Mirror and Global Mirror) relationships per system: 2048
    (This can be any mix of Metro Mirror and Global Mirror relationships)
  Remote Copy relationships per consistency group: no limit is imposed beyond the Remote Copy relationships per system limit
  Remote Copy consistency groups per system: 256
  Total Metro Mirror and Global Mirror volume capacity per I/O group: 1024 TB
    (This limit is the total capacity for all master and auxiliary volumes in the I/O group. Note: do not use volumes larger than 2 TB in Global Mirror with Change Volumes relationships)
  Total Global Mirror with Change Volumes relationships per system: 256
  FlashCopy mappings per system: 4096
  FlashCopy targets per source: 256
  FlashCopy mappings per consistency group: 512
  FlashCopy consistency groups per system: 127
  Total FlashCopy volume capacity per I/O group: 1024 TB

Internal Storage Properties
  SAS chains per control enclosure: 2
  Enclosures per SAS chain: 5
    (Up to 5 expansion enclosures on SAS port 1 and up to 4 expansion enclosures on SAS port 2)
  Expansion enclosures per control enclosure: 9
  Drives per system: 240
  Min-Max drives per enclosure: 0-12 or 0-24
    (Limit depends on the enclosure model)
  RAID arrays per system: 128
  Min-Max member drives per RAID-0 array: 1-8
  Min-Max member drives per RAID-1 array: 2-2
  Min-Max member drives per RAID-5 array: 3-16
  Min-Max member drives per RAID-6 array: 5-16
  Min-Max member drives per RAID-10 array: 2-16
  Hot spare drives: no limit is imposed

External Storage System Properties
  Storage system WWNNs per system: 1024
  Storage system WWPNs per system: 1024
  WWNNs per storage system: 16
  WWPNs per WWNN: 16
  LUNs (managed disks) per storage system: no limit is imposed beyond the managed disks per system limit

iSCSI System Properties
  iSNS servers per system: 1

Storwize V7000 Event Notification Properties
  SNMP servers per system: 6
  Syslog servers per system: 6
  Email (SMTP) servers per system: 6
    (Email servers are used in turn until the email is successfully sent)
  Email users (recipients) per system: 12
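
As a small illustration of how limits like these can be applied in practice, the sketch below (Python, illustrative only, not an IBM tool) encodes the Min-Max member-drive limits from the Internal Storage Properties rows above and rejects an array definition that falls outside the supported range:

    # Illustrative sketch: validate a proposed internal RAID array against the
    # Min-Max member-drive limits tabulated above. Not an IBM tool.
    RAID_MEMBER_LIMITS = {  # RAID level -> (min, max) member drives
        "RAID-0": (1, 8),
        "RAID-1": (2, 2),
        "RAID-5": (3, 16),
        "RAID-6": (5, 16),
        "RAID-10": (2, 16),
    }

    def check_array(level: str, member_drives: int) -> None:
        """Raise ValueError if the member-drive count is out of range."""
        low, high = RAID_MEMBER_LIMITS[level]
        if not low <= member_drives <= high:
            raise ValueError(f"{level} arrays support {low}-{high} member "
                             f"drives, got {member_drives}")

    check_array("RAID-6", 8)    # accepted: within 5-16
    check_array("RAID-0", 12)   # raises ValueError: RAID-0 supports 1-8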


Extents

The following table compares the maximum volume, MDisk and system capacity for each extent size.

Extent size  Max non-thin-provisioned  Max thin-provisioned  Max MDisk            Total storage capacity
(MB)         volume capacity (GB)      volume capacity (GB)  capacity (GB)        manageable per system*
16           2048 (2 TB)               2000                  2048 (2 TB)          64 TB
32           4096 (4 TB)               4000                  4096 (4 TB)          128 TB
64           8192 (8 TB)               8000                  8192 (8 TB)          256 TB
128          16,384 (16 TB)            16,000                16,384 (16 TB)       512 TB
256          32,768 (32 TB)            32,000                32,768 (32 TB)       1 PB
512          65,536 (64 TB)            65,000                65,536 (64 TB)       2 PB
1024         131,072 (128 TB)          130,000               131,072 (128 TB)     4 PB
2048         262,144 (256 TB)          260,000               262,144 (256 TB)     8 PB
4096         262,144 (256 TB)          262,144               524,288 (512 TB)     16 PB
8192         262,144 (256 TB)          262,144               1,048,576 (1024 TB)  32 PB
* The total capacity values assume that all of the storage pools in the system use the same extent size.
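
These figures follow directly from extent counts: the per-system limit of 2^22 extents noted above, the 256 TB cap on an individual volume, and a per-MDisk count of 2^17 extents (inferred here from the table rows; the thin-provisioned column follows a different, published pattern and is not derived). The following Python sketch is illustrative only, not an IBM tool; it reproduces the non-thin-provisioned volume, MDisk and system columns from the extent size:

    # Illustrative sketch: derive three columns of the table above from the
    # extent size, using the extent counts described in the lead-in text.
    MAX_SYSTEM_EXTENTS = 2 ** 22   # per-system extent limit stated earlier
    MAX_MDISK_EXTENTS = 2 ** 17    # per-MDisk extent count inferred from the table
    VOLUME_CAP_GB = 262_144        # 256 TB cap on an individual volume

    def limits_for_extent_size(extent_mb: int) -> tuple[int, int, int]:
        """Return (max volume GB, max MDisk GB, max system TB)."""
        mdisk_gb = extent_mb * MAX_MDISK_EXTENTS // 1024
        volume_gb = min(mdisk_gb, VOLUME_CAP_GB)
        system_tb = extent_mb * MAX_SYSTEM_EXTENTS // (1024 * 1024)
        return volume_gb, mdisk_gb, system_tb

    for size_mb in (16, 32, 64, 128, 256, 512, 1024, 2048, 4096, 8192):
        volume_gb, mdisk_gb, system_tb = limits_for_extent_size(size_mb)
        print(f"{size_mb:>4} MB extents: volume {volume_gb:,} GB, "
              f"MDisk {mdisk_gb:,} GB, system {system_tb:,} TB")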




Restrictions

Configuration Restrictions


Function Restrictions
    Restrictions for Storwize V7000 Unified software versions 1.3.0.0 - 1.3.0.2
    When running software versions 1.3.0.0, 1.3.0.1 or 1.3.0.2, the following functions are available for pre-production use only:
    • Use of asynchronous file replication between Storwize V7000 Unified systems.
      (Note: use of the Storwize V7000 6.3.0 Metro Mirror and Global Mirror functions, for replicating block volumes between Storwize V7000 systems, is not affected by this restriction)
    • Use of Network Data Management Protocol (NDMP) for file backup/restore functions.
      (Note: use of IBM Tivoli Storage Manager (TSM) client with an external TSM server for backup/restore operations is not affected by this restriction)
    • Use of the GPFS Information Lifecycle Management functions, for file placement, migration and deletion on internal or external disk.
    • Use of IBM Tivoli Storage Manager for Space Management as a Hierarchical Storage Manager (HSM), for migrating data to an external TSM server.

    These restrictions were removed with Storwize V7000 Unified software version 1.3.0.3 and later, allowing use of all functions in production environments.

    Restriction for Storwize V7000 Unified software version 1.3.0.3
    If Storwize V7000 Unified is configured for authentication with Microsoft Active Directory, then use of the SCP protocol for file I/O access is not supported.

    This restriction was removed with Storwize V7000 Unified version 1.3.1.0 and later.



    Hierarchical Storage Management (HSM)
    Use of file sets with hierarchical storage management (HSM) enabled file systems is not supported for Storwize V7000 Unified version 1.3.2.0 and below. Additional information is available in the following technote: File Sets are Not Supported for HSM Managed File Systems

    Once a file system in a Storwize V7000 Unified system is HSM-enabled, replacing or exchanging the TSM server may lead to data loss. Additional information is available in the following technote: Switching of TSM Server for HSM Enabled Storwize V7000 Unified System may Lead to Data Loss

    Asynchronous Replication Behaviour with File Sets
    File set definitions and associated information such as quotas are not replicated to the target system. The directory structure of the source file set, together with all files and extended attribute information contained within it, is replicated to the target, but is handled as a normal directory there.

    When a file set within the source file system is unlinked, it is still held by the source file system but disappears from the source directory tree. If a replication process runs while a source file set is unlinked, the file tree appears as if those files have been removed, so they are removed from the target system to match the current state of the source. This may remove a significant amount of data from the target system and would leave that data unavailable in the event of a disaster at the source location.

    Upon re-linking the file set on the source, the file tree reappears and the next replication cycle behaves as though the entire file tree had just been created, so those files are replicated to the target again. This may cause a significant amount of data to be resent to the target to bring it back into synchronization with the source system.

    Refer to the Storwize V7000 Unified Information Center for additional information on managing asynchronous replication, including authentication and networking requirements.

    Asynchronous Replication across Code Levels
    Asynchronous replication is not supported across different code levels. The source and target systems must run the same code level.

Storwize V7000 Unified Software Upgrade

Storwize V7000 Unified software upgrades can be performed concurrently with the following methods of host access:
  • Block I/O access:
    • Fibre Channel
    • iSCSI
  • File I/O access:
    • NFS (using hard mounts)

Host systems using CIFS, HTTPS, SCP or FTP for file I/O access may experience brief interruptions during the upgrade process and may need to reconnect after the upgrade has completed. It is recommended that any applications accessing file shares over these protocols be quiesced before starting a software upgrade.


Storwize V7000 Block Function Restrictions for File System Volumes

The table below shows which V7000 block functions can be used for file system volumes:

  Shared storage pools for block and file volumes: Yes
  Easy Tier, when using solid-state drives (SSDs): Yes
  External storage virtualization: Yes. Refer to CIFS Planning Guidance for IBM Storwize V7000 Unified for additional restrictions.
  Remote Copy (Metro Mirror, Global Mirror): No; use V7000 Unified asynchronous file replication for file workloads.
  FlashCopy: No; use GPFS snapshots for file workloads.
  Volume mirroring: No; use GPFS file system replication for file workloads.
  Thin provisioning (space-efficient volumes): No; use GPFS file placement and migration features for file workloads.
  Volume expand/shrink: No; use GPFS to expand or shrink file system capacity.
  Volume migration between storage pools: No; use GPFS functionality to manage data placement.
    (Note: volume migration within a storage pool, e.g. as a result of removing an MDisk or array, is supported.)
  Image-mode volumes: No


DS4000 Maintenance

Storwize V7000 Unified supports concurrent ESM firmware upgrades for those DS4000 models listed as such on the Storwize V7000 Supported Hardware List when they are running controller firmware 06.23.05.00 or later. Controllers running firmware levels earlier than 06.23.05.00 are not supported for concurrent ESM upgrades; customers in this situation who wish to gain support for concurrent ESM upgrades must first upgrade the DS4000 controller firmware to 06.23.05.00. This action is a controller firmware upgrade, not an ESM upgrade, and concurrent controller firmware upgrades are already supported in conjunction with Storwize V7000. Once the controller firmware is at 06.23.05.00 or later, the ESM firmware can be upgraded concurrently.

Note: The DS4000 ESM firmware upgrade must be done on one disk expansion enclosure at a time. A 10-minute delay is required between completing the upgrade of one enclosure and starting the upgrade of the next. Confirm via the Storage Manager application's "Recovery Guru" that the DS4000 status is in an optimal state before upgrading the next enclosure. If it is not, do not continue ESM firmware upgrades until the problem is resolved.
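
The pacing requirement in this note can be summarized as follows. In the Python sketch below, upgrade_esm() and enclosure_is_optimal() are hypothetical placeholders standing in for the operator's manual steps (they are not DS4000 Storage Manager APIs); the sketch only makes the one-at-a-time sequencing and the 10-minute delay explicit:

    # Pacing sketch only: the two helper functions below are hypothetical
    # stand-ins for manual operator actions, not real DS4000 APIs.
    import time

    def enclosure_is_optimal() -> bool:
        # Placeholder: the operator confirms status via the Recovery Guru.
        return True

    def upgrade_esm(enclosure: str) -> None:
        # Placeholder for upgrading ESM firmware on one expansion enclosure.
        print(f"Upgrading ESM firmware on {enclosure}")

    def upgrade_all_esms(enclosures: list[str]) -> None:
        """Upgrade one enclosure at a time, confirming optimal status before
        each upgrade and waiting 10 minutes before starting the next."""
        for enclosure in enclosures:
            if not enclosure_is_optimal():
                raise RuntimeError("DS4000 not optimal; resolve the problem first")
            upgrade_esm(enclosure)
            time.sleep(10 * 60)  # required 10-minute delay between enclosures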


Host Limitations for File I/O Access

CIFS access to file systems is supported only for volumes that are placed on Storwize V7000 internal storage.


Host Limitations for Block I/O Access

Windows SAN Boot Clusters (MSCS):

It is possible to SAN Boot a Microsoft Cluster subject to the following restrictions imposed by Microsoft:
  • On Windows 2003, clustered disks and boot disks can be presented on the same storage bus, but ONLY if the Storport driver is being used.

These restrictions and more are described in the Microsoft White Paper: "Microsoft Windows Clustering: Storage Area Networks".

We have not tested, and therefore do not support, modifying the registry key as suggested on page 28 (which would allow boot disks and clustered disks on the same storage bus on Windows 2003 without the Storport driver).


Oracle

Oracle Version and OS             Restrictions that apply
Oracle RAC 10g on Windows         1
Oracle RAC 10g on AIX             1, 2
Oracle RAC 11g on AIX             2
Oracle RAC 10g on HP-UX 11.31     1, 2
Oracle RAC 11g on HP-UX 11.31     1, 2
Oracle RAC 10g on HP-UX 11.23     1, 2
Oracle RAC 11g on HP-UX 11.23     1, 2
Oracle RAC 10g on Linux host      1, 3

Restriction 1: ASM cannot recognise a size change when a Storwize V7000 disk is resized unless the disk is removed from ASM and added again.

Restriction 2: After an ASM disk group has successfully dropped a disk, the disk cannot be deleted from the OS. The workaround for this OS restriction is to bring down the ASM instance, delete the disk from the OS, and bring the ASM instance back up.

Restriction 3: For RHEL4, set the Oracle Clusterware 'misscount' parameter to a larger value to allow SDD to complete path failover first. The default misscount setting of 60 seconds is too short for SDD; we recommend setting it to 90 or 120 seconds. Command to use: crsctl set css misscount 90

[{"Product":{"code":"ST5Q4U","label":"IBM Storwize V7000 Unified (2073-700)"},"Business Unit":{"code":"BU058","label":"IBM Infrastructure w\/TPS"},"Component":"1.3","Platform":[{"code":"PF025","label":"Platform Independent"}],"Version":"1.3","Edition":"","Line of Business":{"code":"","label":""}}]

Document Information

Modified date:
17 June 2018

UID

ssg1S1003906