
V8.2.1.x Configuration Limits and Restrictions for IBM Storwize V7000 and V7000F

Preventive Service Planning


Abstract

This document lists the configuration limits and restrictions specific to IBM Storwize V7000 software version 8.2.1.x

Content

V8.2.x does not support V7000 Gen 1. Customers with V7000 Gen 1 IO groups cannot upgrade to v8.2.x.

V8.2.1 does not support iSER Clustering on V7000 Gen 2 or Gen 2+ systems.

V8.2.1 does not support the Persistent Node IP feature of Node Rescue scenarios on V7000 Gen 2 or Gen 2+ systems.

V8.2.1 does not support Transparent Cloud Tiering.

V8.2.1.0 introduces support for NVMe over Fibre Channel (FC-NVMe) host attachment. Refer to the 'NVMe over Fibre Channel host properties' section of the Maximum Configurations table below before attaching FC-NVMe hosts: the configuration limits for FC-NVMe are lower than for SCSI (FC/iSCSI/SAS) hosts, and FC-NVMe host attachment reduces the maximum supported number of SCSI hosts.

The use of WAN optimisation devices such as Riverbed is not supported in native Ethernet IP partnership configurations containing Storwize V7000.


Data Reduction Pools

The following restrictions apply for Data Reduction Pools (DRP):
  1. Child pools are not supported in a DRP;
  2. VVols are not supported in a DRP (because child pools are not supported);
  3. A volume in a DRP cannot be shrunk;
  4. A volume in a DRP cannot be moved between I/O groups (use FlashCopy or Metro Mirror/Global Mirror instead);
  5. A volume mirror cannot be split to a copy in a different I/O group;
  6. Real/used/free/tier capacities are not reported per volume - only per pool.

Note: These restrictions apply to all versions of IBM Spectrum Virtualize v8.1.2 and later.


REST API

Customers using the REST API to list more than 2000 objects may experience a loss of service from the API as it restarts due to memory constraints.

It is not possible to access the REST API using a cluster's IPv6 address.
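
The 2000-object limit can usually be avoided by filtering list commands so that each response returns only a subset of the configuration. The following is a minimal sketch only, assuming the standard Spectrum Virtualize REST interface on port 7443 (/rest/auth for authentication and /rest/<CLI command> for queries) and that a filtervalue parameter is accepted in the request body; confirm the exact request format against the REST API documentation for your code level.

    # Authenticate against the cluster management address (IPv4 only - see the restriction above)
    curl -k -X POST https://<cluster_ipv4>:7443/rest/auth \
         -H 'X-Auth-Username: <user>' -H 'X-Auth-Password: <password>'

    # Use the returned token and filter the query (here by storage pool, an illustrative filter)
    # so that fewer than 2000 objects are returned per call, rather than listing every volume
    # on a large system in a single request.
    curl -k -X POST https://<cluster_ipv4>:7443/rest/lsvdisk \
         -H 'X-Auth-Token: <token_from_auth_response>' \
         -d '{"filtervalue":"mdisk_grp_name=Pool0"}'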


NVMe over Fibre Channel

Hosts using the NVMe protocol cannot be mapped to HyperSwap or stretched volumes.

Volumes accessed by hosts using the NVMe protocol cannot be configured with multiple access I/O groups due to a limitation of the NVMe protocol.


RAID and Distributed RAID

Storwize V7000 Gen 3 systems support DRAID5, DRAID6 and RAID10. RAID5 and RAID6 are not supported.


DRAID Strip Size

For candidate drives with a capacity greater than 4TB, a strip size of 128 cannot be specified for either RAID-5 or RAID-6 DRAID arrays. For these drives, a strip size of 256 should be used.
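
For illustration, a DRAID-6 array over large drives could be created with an explicit 256 strip size along the following lines. This is a hedged sketch only: the drive class, drive count, stripe width and rebuild-area values are placeholders, and the exact parameter names (in particular the strip-size flag) should be confirmed against the mkdistributedarray documentation for your code level.

    # Create a distributed RAID-6 array from 12 drives of drive class 1 in pool Pool0,
    # specifying a 256 strip size as required for candidate drives larger than 4TB.
    # Parameter names and values are illustrative only.
    mkdistributedarray -level raid6 -driveclass 1 -drivecount 12 -stripewidth 10 \
        -rebuildareas 1 -strip 256 Pool0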


Non-Disruptive Volume Move (NDVM)
The following Fibre Channel attached host types are supported for non-disruptively moving a volume between I/O groups:

Host Operating System | Host Multipathing | Host Clustering | Notes
AIX 7.2 | AIXPCM | | Non-disruptive volume move may result in the same volume being mapped to different hosts in the same host cluster using different SCSI IDs. If the host cluster cannot tolerate this configuration then non-disruptive volume move cannot be used. SAN boot is supported. NPIV is supported.
Microsoft Windows 2019 | MSDSM | Hyper-V Failover Cluster | SAN boot is supported.
Microsoft Windows 2016 | MSDSM | Hyper-V Failover Cluster | SAN boot is supported.
RedHat 8 | Native | | The original paths may need to be manually removed on the host after removing access to the old I/O group.
SLES 15 | Native | | The original paths may need to be manually removed on the host after removing access to the old I/O group.
VMware 6.7 | Native | | VAAI is supported.
VMware 6.5 | Native | | VAAI is supported.
Solaris 11.3 SPARC | MPXIO | | SAN boot is supported.

Note: For all other host types, I/O should be quiesced when moving a volume.

When moving a volume that is mapped to a host cluster, you must rescan disk paths on all host cluster nodes to ensure the new paths have been detected before removing access from the original I/O group.
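
For example, the following host-side commands (all standard operating system utilities, listed here for illustration) can be used to rescan paths on each cluster node; choose the command appropriate to the node's operating system and multipath driver.

    cfgmgr                                        # AIX
    rescan-scsi-bus.sh                            # Linux (provided by the sg3_utils package)
    esxcli storage core adapter rescan --all      # VMware ESXi
    # Windows: use "Rescan Disks" in Disk Management, or run "rescan" within diskpart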


NPIV ( N_Port ID Virtualization )

SAN Volume Controller and Storwize Version 7.7 introduced support for NPIV ( N_Port ID Virtualization ) for Fibre Channel fabric attachment. FCoE is not supported with NPIV. The following recommendations and restrictions should be followed when implementing the NPIV feature.

Operating systems not currently supported for use with NPIV:

  • RHEL6 and earlier on IBM Power
  • HPUX 11iV2
  • Veritas DMP multipathing on Windows with RAID-5 volumes in VxVM

General requirements

Required SDD versions for IBM AIX and Microsoft Windows Environments:

  1. IBM AIX operating systems require a minimum SDDPCM version of 2.6.8.0, or AIXPCM;
  2. Microsoft Windows requires a minimum SDDDSM version of 2.4.7.0. The latest recommended level, which resolves the issues listed below, is 2.4.7.1.
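
The installed driver level can be checked before enabling NPIV, for example with the standard SDD query commands:

    pcmpath query version      # AIX with SDDPCM - must report 2.6.8.0 or later (or use AIXPCM)
    datapath query version     # Windows with SDDDSM - must report 2.4.7.0 or later (2.4.7.1 recommended)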

Path Optimization

User intervention may be required when changing the NPIV state from "Transitional" to "Disabled". All paths to a LUN with SDDDSM or SDDPCM may remain "Non-Optimized" when NPIV is changed from the "Transitional" state to "Disabled".

To resolve this issue please use the following instructions:

IBM AIX
For SDDPCM:
    Run "pcmpath chgprefercntl device <device number>/<device number range>" on AIX. This will restore both Optimized and Non-Optimized paths for all the LUNs correctly.

Windows 2008 and 2012
For SDDDSM:
    Run "datapath rescanhw" on Windows. This will restore both Optimized and Non-Optimized paths for all the LUN's correctly.
This issue is resolved with SDDDSM version 2.4.7.1

Windows 2008 and 2012 Non-Preferred Paths with SDDDSM
When NPIV enters the Transitional state from Disabled with all the SDDDSM paths in the Non-Preferred state, the paths to the virtual ports also become Non-Preferred. This path configuration might cause IO failures as soon as NPIV moves into the "Enabled" state.
As a workaround, configure at least one preferred path to each LUN while NPIV is in the "Disabled" state.
This issue is resolved with SDDDSM version 2.4.7.1

Solaris
Emulex HBA Settings:
1. When implementing NPIV on Solaris 11, the default disk IO timeout needs to be changed to 120 seconds by adding "set sd:sd_io_time=120" to the /etc/system file. A system reboot is required for the change to take effect.
2. NPIV is not supported when ports on the host HBA are connected to a 16Gb SAN.

Other Operating Systems
Other operating systems may also experience the same issue when changing the NPIV state from "Transitional" to "Disabled", in which case the operating-system-specific rescan command should be used.

Fabric Attachment
NPIV mode on SVC or Storwize is only supported when used with Brocade or Cisco Fibre Channel SAN switches that are NPIV capable.


Nodes in an IO group cannot be replaced by nodes with less memory when compressed volumes are present

If a customer must migrate from 64GB to 32GB memory node canisters in an IO group, they will have to remove all compressed volume copies in that IO group. This restriction applies to 7.7.0.0 and newer software.

A customer must not perform the following sequence of actions:

  1. Create an IO group with node canisters with 64GB of memory.
  2. Create compressed volumes in that IO group.
  3. Delete both node canisters from the system with CLI or GUI.
  4. Install new node canisters with 32GB of memory and add them to the configuration in the original IO group with CLI or GUI.


HyperSwap

When using the HyperSwap function, please configure your host multipath driver to use an ALUA-based path policy.
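
As an illustration for Linux hosts using device-mapper-multipath, an ALUA-based policy is typically selected with a device stanza similar to the sketch below; the values shown are indicative only, and the full recommended settings for your distribution and code level should be taken from the IBM documentation.

    # /etc/multipath.conf (sketch) - group paths by ALUA priority for IBM 2145-type volumes
    devices {
        device {
            vendor "IBM"
            product "2145"
            path_grouping_policy "group_by_prio"
            prio "alua"
            failback "immediate"
            no_path_retry 32
        }
    }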

Due to the requirement for multiple access IO groups, SAS attached host types are not supported with HyperSwap volumes.

A volume configured with multiple access I/O groups, on a system in the storage layer, cannot be virtualized by a system in the replication layer. This restriction prevents a HyperSwap volume on one system from being virtualized by another.

AIX Live Partition Mobility (LPM)
AIX LPM is supported with the HyperSwap function and AIX 7.x 


Clustered Systems

A Storwize V7000 system requires native Fibre Channel SAN connectivity, or alternatively 8Gbps/16Gbps direct-attach Fibre Channel connectivity, for communication between all nodes in the local cluster. Fibre Channel over Ethernet ( FCoE ) connectivity between all nodes in the local cluster is also supported. Clustering can also be accomplished with 25Gbps Ethernet for standard topologies. For HyperSwap topologies a SCORE request is required; please contact your IBM representative to raise one.

Partnerships between systems for Metro Mirror or Global Mirror replication can be used with Fibre Channel, native Ethernet or FCoE connectivity; however, direct FCoE links are only supported up to a maximum of 300 metres. Distances greater than 300 metres are only supported when using an FCIP link or Fibre Channel between source and target.


Direct Attachment

SAN boot is not supported on host Fibre Channel direct attach systems running v8.2.1.0 - v8.2.1.2. This configuration is supported with v8.2.1.3 or later.


Cisco Nexus

The minimum level of Cisco Nexus firmware supported for FCoE with the IBM Storwize V7000 Gen2 / Gen2+ is 5.2(1)N1(2a).


16Gbps Fibre Channel Canister Connection

Please visit the IBM System Storage Inter-operation Center (SSIC) for Fibre Channel configurations supported with 16Gbps node hardware.

Note: 16Gbps node hardware is supported when connected to Brocade and Cisco 8Gbps or 16Gbps fabrics only.

Direct connections to 2Gbps or 4Gbps SANs, or direct host attachment to 2Gbps or 4Gbps ports, are not supported.

Other configured switches which are not directly connected to the 16Gbps Node hardware can be any supported fabric switch as currently listed in SSIC.


25Gbps Ethernet Canister Connection

Two optional 2-port 25Gbps Ethernet adapters are supported in each node canister for iSCSI communication with iSCSI-capable Ethernet ports in hosts via Ethernet switches. These 2-port 25Gbps Ethernet adapters do not support FCoE.

There are two types of 25Gbps Ethernet adapter feature supported:

  1. RDMA over Converged Ethernet (RoCE)
  2. Internet Wide-area RDMA Protocol (iWARP)

Either will work for standard iSCSI communications, i.e. not using Remote Direct Memory Access (RDMA). A future software release will add support for RDMA links using new protocols, such as NVMe over Ethernet.

When use of RDMA with a 25Gbps Ethernet adapter becomes possible, RDMA links will only work between RoCE ports or between iWARP ports (i.e. from a RoCE node canister port to a RoCE port on a host, or from an iWARP node canister port to an iWARP port on a host).

The 25Gbps adapters come with SFP28 fitted, which can be used to connect to switches using OM3 optical cables.

For Ethernet switches and adapters supported in hosts visit the SSIC.

This is an example of a RoCE adapter for use in a host.
https://docs.nvidia.com/networking/display/cx4lxen

This is an example of an iWARP adapter for use in a host.
https://www.chelsio.com/nic/unified-wire-adapters/t6225-cr/

Customers who want to connect a 10Gb switch to a 25Gb HBA should be aware that this is only supported via a SCORE request. Please contact your IBM representative to raise a SCORE request.


IP Partnership

IP partnerships are supported on any of the available Ethernet ports. Using an Ethernet switch to convert a 25Gb to a 1Gb IP partnership, or a 10Gb to a 1Gb IP partnership, is not supported; the IP infrastructure on both partnership sites must therefore match. Bandwidth limiting on IP partnerships between both sites is supported.


Fabric Limitations

Only one FCF ( Fibre Channel Forwarder ) switch per fabric is supported.


VMware vSphere Virtual Volumes (VVols)

The maximum number of Virtual Machines on a single VMware ESXi host in a Storwize / VVol storage configuration is limited to 680.

The use of VMware vSphere Virtual Volumes (VVols) on a system that is configured for HyperSwap is not currently supported with SVC/Storwize.


Host Limitations

Cisco UCS
UCS firmware code level: 4.0(4)
UCS Platforms M3, M4 and M5

Protocol: FC (EHM)

  • UEFI SAN boot is not supported with VIC13XX adapters
  • Path recovery via the unified FC uplink port fails when the attached switch is rebooted

Protocol: ISCSI

  • Connecting the Storwize Storage Array (either of the 25Gb HICs) to a UCS system must be done via breakout cables.

iSER
Operating systems not currently supported for use with iSER:

  • VMware ESXi 6.7 using Mellanox ConnectX-4 Lx EN
  • Windows 2012 R2 using Mellanox ConnectX-4 Lx EN
  • Windows 2016 using Mellanox ConnectX-4 Lx EN

FCoE
Operating systems not currently supported for use with FCoE:

  • RedHat 6.x
  • VMware 6.0
  • Windows 2012 Hyper-V Cluster

Microsoft Offload Data Transfer ( ODX ) and SDDDSM Requirements
Storwize V7000 version 7.5.0 introduced support for Microsoft ODX. In order to utilise this function, all Windows hosts accessing Storwize V7000 are required to be at a minimum SDDDSM version of 2.4.5.0. Earlier versions of SDDDSM are not supported when the ODX function is activated.

Windows NTP server 
The Linux NTP client used by Storwize V7000 may not always function correctly with the Windows W32Time NTP server.


Oracle

Oracle Version and OS | Restrictions that apply
Oracle Release 11.2, any platform | 1
Oracle Release 12.1, any platform | 1

Restriction 1:
Oracle ASM disk groups may dismount with the following error: 

"Waited 15 secs for write IO to PST"

Recommendation

Increase asm_hbeatiowait to 120 seconds to prevent this issue from occurring.

Applies to Oracle Database - Enterprise Edition - Version 11.2.0.3 to 12.1.0.1 [Release 11.2 to 12.1] on any platform
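
A hedged example of applying this recommendation on the ASM instance is shown below. On these Oracle releases the parameter is commonly exposed as the hidden parameter "_asm_hbeatiowait"; treat the exact name and syntax as an assumption to be confirmed against Oracle's own documentation for your release.

    # Run against the ASM instance; the parameter name and scope shown are illustrative.
    sqlplus / as sysasm <<'EOF'
    -- increase the ASM heartbeat I/O wait to 120 seconds (takes effect after ASM restart)
    ALTER SYSTEM SET "_asm_hbeatiowait"=120 SCOPE=SPFILE SID='*';
    EOF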


Priority Flow Control for iSCSI

Priority Flow Control for iSCSI is supported on Brocade VDX 10-gigabit Ethernet switches only.


Maximum Configurations

Configuration limits for Storwize V7000:

Property
Hardware Type
Maximum Number
Comments
System (Cluster) Properties 
Control enclosures per system (cluster)
4
Each control enclosure contains two node canisters
Nodes per system
8
Arranged as four I/O groups
Nodes per fabric
64
Maximum number of SVC or Storwize family system nodes that can be present on the same Fibre Channel fabric, with visibility of each other
Fabrics per system
8
The number of counterpart SANs which are supported
Inter-cluster partnerships per system
3
A system may be partnered with up to three remote systems. No more than four systems may be in the same connected set
IP Quorum devices per system
5
Data encryption keys per system
1024
Node Properties 
Logins per node Fibre Channel WWPN
512
Includes logins from server HBAs, disk controller ports, node ports within the same system and node ports from remote systems
Fibre Channel buffer credits per port - 8Gbps FC adapter
255
The number of credits granted by the switch to the node
Fibre Channel buffer credits per port - 16Gbps FC adapter
4095
The number of credits granted by the switch to the node
iSCSI sessions per node
1024
2048 in IP failover mode (when partner node is unavailable).
This limit includes both iSCSI Host Attach AND iSCSI Initiator sessions
iSER sessions per node
256
Managed Disk Properties 
Managed disks (MDisks) per system
4096
The maximum number of logical units which can be managed by a system, including internal arrays.

Internal distributed arrays consume 16 logical units.

This number also includes external MDisks which have not been configured into storage pools (managed disk groups)
Managed disks per storage pool (managed disk group)
128
Storage pools per system
1024
Parent pools per system
128
Child pools per system
1023
Not supported in a Data Reduction Pool
Managed disk extent size
8192 MB
Capacity for an individual internal managed disk (array)
-
No limit is imposed beyond the maximum number of drives per array limits.
Maximum size is dependent on the extent size of the Storage Pool.
Comparison Table: Maximum Volume, MDisk and System capacity for each extent size.
Capacity for an individual external managed disk
1 PB
Note: External managed disks larger than 2 TB are only supported for certain types of storage systems. Refer to the supported hardware matrix for further details.
Maximum size is dependent on the extent size of the Storage Pool.
Comparison Table: Maximum Volume, MDisk and System capacity for each extent size.
Total storage capacity manageable per system
32 PB
Maximum requires an extent size of 8192 MB to be used

This limit represents the per system maximum of 2^22 extents.

Comparison Table: Maximum Volume, MDisk and System capacity for each extent size.
Data Reduction Pool Properties
Data Reduction Pools per system
4
Mdisks per Data Reduction Pool
128
Volumes per Data Reduction Pool
10000 - (Number of Data Reduction Pools x 12)
Extents per I/O group per Data Reduction Pool
128000
Volume (Virtual Disk) Properties 
Basic Volumes (VDisks) per system
10000
Each Basic Volume uses 1 VDisk, each with one copy.
HyperSwap volumes per system
1250
Each HyperSwap volume uses 4 VDisks, each with one copy, 1 active-active remote copy relationship and 4 FlashCopy mappings.
Volumes per I/O group
(volumes per caching I/O group)
10000
Thin-provisioned (space-efficient) volume copies in regular pools per system
-
No limit is imposed here beyond the volume copies per system limit.
Compressed volume copies in regular pools per system
2048
Maximum requires a system containing four control enclosures; refer to the Compressed volume copies in regular pools per I/O group limit below
Compressed volume copies in regular pools per I/O group
512
With 32GB Cache upgrade and 2nd Compression Accelerator card installed.
Compressed volume copies in data reduction pools per system
-
No limit is imposed here beyond the volume copy limit per data reduction pool
Compressed volume copies in data reduction pools per I/O group
-
No limit is imposed here beyond the volume copy limit per data reduction pool
Deduplicated volume copies in data reduction pools per system
-
No limit is imposed here beyond the volume copy limit per data reduction pool
Deduplicated volume copies in data reduction pools per I/O group
-
No limit is imposed here beyond the volume copy limit per data reduction pool
Volumes per storage pool
-
No limit is imposed beyond the volumes per system limit
Fully-allocated volume capacity
256 TB
Maximum size for an individual fully-allocated volume. 

Maximum size is dependent on the extent size of the Storage Pool.
Comparison Table: Maximum Volume, MDisk and System capacity for each extent size.
Thin-provisioned (space-efficient) per-volume capacity for volumes copies in regular and data reduction pools
256 TB
Maximum size for an individual thin-provisioned volume.

Maximum size is dependent on the extent size of the Storage Pool.
Comparison Table: Maximum Volume, MDisk and System capacity for each extent size.
Compressed volume capacity in regular pools - pools containing non-Flash storage
16 TB
Maximum size for an individual compressed volume.

See this Flash for further information on this limit.

Maximum size is dependent on the extent size of the Storage Pool.
Comparison Table: Maximum Volume, MDisk and System capacity for each extent size.
Compressed volume capacity in regular pools - pools containing all-Flash storage
32 TB
HyperSwap volume capacity in a single I/O group using RAID
850 TiB
This is due to the limit on bitmap space for mirroring and replication in each I/O group.
See the Knowledge Center for details.
Host mappings per system
64000
See also - volume mappings per host object below
Mirrored Volume (Virtual Disk) Properties 
Copies per volume
2
Volume copies per system
10000
Total mirrored volume capacity per I/O group
1024 TB
Generic Host Properties 
Host objects (IDs) per system
2048
A host object may contain both Fibre Channel ports and iSCSI names
Host objects (IDs) per I/O group
512
Refer to the additional Fibre Channel and iSCSI host limits below
Volume mappings per host object
2048
Although IBM Storwize V7000 allows the mapping of up to 2048 volumes per host object, not all hosts are capable of accessing/managing this number of volumes. The practical mapping limit is restricted by the host OS, not IBM Storwize V7000.
Note: this limit does not apply to hosts of type adminlun (used to support VMware vvols).
Total Fibre Channel ports and iSCSI names per system
8192
Total Fibre Channel ports and iSCSI names per I/O group
2048
Total Fibre Channel ports and iSCSI names per host object
32
iSCSI names per host object (ID)
8
Host Cluster Properties 
Host clusters per system
512
Hosts in a host cluster
128
Fibre Channel Host Properties (including hosts attached using FCoE) 
Fibre Channel hosts per system
2048
Fibre Channel host ports per system
8192
Fibre Channel hosts per I/O group
512
Fibre Channel host ports per I/O group
2048
Fibre Channel hosts ports per host object (ID)
32
Simultaneous I/Os per node FC port
8Gbps FC adapter
2048
16Gbps FC adapter
4096
iSCSI Host Properties 
iSCSI hosts per system
2048
iSCSI hosts per I/O group
512
iSCSI names per host object (ID)
8
iSCSI names per I/O group
512
iSCSI Hardware Properties 
10Gbps Ethernet adapters per canister
2
10Gbps Ethernet ports per canister
8
FCoE is supported on the first four 10GbE ports in the system
iSER Host Properties 
iSER hosts per system
2048
iSER hosts per I/O group
512
iSER names per host object (ID)
8
iSER Hardware Properties 
25Gbps iWARP adapters per canister
V7000 Gen 2
V7000 Gen 2+
2
V7000 Gen 3
3
25Gbps ROCE adapters per canister
V7000 Gen 2
V7000 Gen 2+
2
V7000 Gen 3
3
25Gbps iWARP ports per canister
V7000 Gen 2
V7000 Gen 2+
4
V7000 Gen 3
6
25Gbps ROCE ports per canister
V7000 Gen 2
V7000 Gen 2+
4
V7000 Gen 3
6
NVMe over Fibre Channel Host Properties 
FC-NVMe hosts per system
6
Up to 6 FC-NVMe hosts are supported per system when no SCSI (FC/iSCSI/SAS) hosts are attached.
These limits are not policed by the Spectrum Virtualize software. Any configurations that exceed these limits may experience significant adverse performance impact.
FC-NVMe hosts per I/O group
-
No limit is imposed beyond the per system limit when no SCSI (FC/iSCSI/SAS) hosts are attached.
FC-NVMe and SCSI host intermix
See notes
When FC-NVMe and SCSI hosts are attached to the same I/O group, the following restrictions apply:
  • Maximum NVMe hosts per I/O group: 1
  • Maximum SCSI hosts per I/O group: 5
  • The maximum FC-NVMe hosts per system limit (6) still applies.
These limits are not policed by the Spectrum Virtualize software. Any configurations that exceed these limits may experience significant adverse performance impact.
NVMe Qualified Names (NQNs) per host object (ID)
2
Copy Services Properties
Remote Copy (Metro Mirror and Global
Mirror) relationships per system
10000
This can be any mix of Metro Mirror and Global Mirror relationships.
Active-Active Relationships (HyperSwap) per system
1250
Remote Copy relationships per consistency group (<=256 GMCV relationships configured)
-
No limit is imposed beyond the Remote Copy relationships per system limit.

Refer to the Changes to support for Global Mirror with Change Volumes page for information relating to GMCV performance considerations and best practice.
Remote Copy relationships per consistency group (>256 GMCV relationships configured)
200
Remote Copy consistency
groups per system
256
Total Metro Mirror and Global Mirror volume capacity per I/O group
1024 TB
This limit is the total capacity for all master and auxiliary volumes in the I/O group.
Total number of Global Mirror with Change Volumes relationships per system
V7000 Gen 2
256
60s cycle time (Change volumes used for active-active relationships do not count toward this limit).
1500
300s cycle time (Change volumes used for active-active relationships do not count toward this limit).
V7000 Gen 2+
V7000 Gen 3
256
60s cycle time (Change volumes used for active-active relationships do not count toward this limit).
2500
300s cycle time (Change volumes used for active-active relationships do not count toward this limit).
FlashCopy mappings per system
5000
FlashCopy targets
per source
256
FlashCopy mappings
per consistency group
512
FlashCopy consistency
groups per system
500
Total FlashCopy volume capacity per I/O group
4096 TB
IP Partnership Properties 
Inter-cluster IP partnerships per system
1
A system may be partnered with up to three remote systems. A maximum of one of those can be IP and the other two FC.
I/O groups per system
2
The nodes from a maximum of two I/O groups per system can be used for IP partnership.
Inter site links per IP partnership
2
A maximum of two inter site links can be used between two IP partnership sites.
Ports per node
1
A maximum of one port per node can be used for IP partnership.
Internal Storage Properties 
SAS chains per control enclosure
2
Expansion enclosures per SAS chain
10
Expansion enclosures per control enclosure
20
Drives per I/O group
760
Drives per system
3040
Non-Distributed RAID Array Properties 
Arrays per system
128
Encrypted arrays per system
128
Drives per array
16
Min-Max member drives per RAID-0 array
1-8
Not supported by Gen 3
Min-Max member drives per RAID-1 array
2-2
Not supported by Gen 3
Min-Max member drives per RAID-5 array
3-16
Not supported by Gen 3
Min-Max member drives per RAID-6 array
5-16
Not supported by Gen 3
Min-Max member drives per RAID-10 array
2-16
Hot spare drives
-
No limit is imposed
Distributed RAID Array Properties 
Arrays per system
32
The presence of non-DRAID arrays will reduce this limit
Encrypted arrays per system
32
The presence of non-DRAID arrays will reduce this limit
Arrays per I/O group
10
The presence of non-DRAID arrays will reduce this limit
Drives per array
128
Min-Max member drives per RAID-5 array
4-128
Min-Max member drives per RAID-6 array
6-128
Rebuild areas per non-FCM array
1-4
Rebuild areas per FCM array
1
Min-Max stripe width for RAID-5 array
3-16
Min-Max stripe width for RAID-6 array
5-16
Max drive capacity for RAID-5 array
8 TB
External Storage System Properties 
Storage system WWNNs per system (cluster)
1024
Storage system WWPNs per system (cluster)
1024
WWNNs per storage system
16
WWPNs per WWNN
16
LUNs (managed disks) per storage system
-
No limit is imposed beyond the managed disks per system limit
System and User Management Properties 
User accounts per system
400
Includes the default user accounts
User groups per system
256
Includes the default user groups
Authentication servers per system
1
NTP servers per system
1
iSNS servers per system
1
Concurrent open SSH sessions per system
32
Event Notification Properties 
SNMP servers per system
6
Syslog servers per system
6
Email (SMTP) servers per system
6
Email servers are used in turn until the email is successfully sent
Email users (recipients) per system
12
LDAP servers per system
6
REST API Properties 
Threads per session
64
HTTP header size
16 KB
Objects per response
2000
 

Extents

The following table compares the maximum volume, MDisk and system capacity for each extent size.

Extent size (MB) | Maximum non thin-provisioned volume capacity in GB | Maximum thin-provisioned volume capacity in GB (for regular pools) | Maximum compressed volume size (for regular pools) ** | Maximum thin-provisioned and compressed volume size in data reduction pools in GB | Maximum total thin-provisioned and compressed capacity for all volumes in a single data reduction pool per I/O group in GB | Maximum MDisk capacity in GB | Maximum DRAID MDisk capacity in TB | Total storage capacity manageable per system *
16 | 2048 (2 TB) | 2000 | 2 TB | 2048 (2 TB) | 2048 (2 TB) | 2048 (2 TB) | 32 | 64 TB
32 | 4096 (4 TB) | 4000 | 4 TB | 4096 (4 TB) | 4096 (4 TB) | 4096 (4 TB) | 64 | 128 TB
64 | 8192 (8 TB) | 8000 | 8 TB | 8192 (8 TB) | 8192 (8 TB) | 8192 (8 TB) | 128 | 256 TB
128 | 16,384 (16 TB) | 16,000 | 16 TB | 16,384 (16 TB) | 16,384 (16 TB) | 16,384 (16 TB) | 256 | 512 TB
256 | 32,768 (32 TB) | 32,000 | 32 TB | 32,768 (32 TB) | 32,768 (32 TB) | 32,768 (32 TB) | 512 | 1 PB
512 | 65,536 (64 TB) | 65,000 | 64 TB | 65,536 (64 TB) | 65,536 (64 TB) | 65,536 (64 TB) | 1024 (1 PB) | 2 PB
1024 | 131,072 (128 TB) | 130,000 | 96 TB ** | 131,072 (128 TB) | 131,072 (128 TB) | 131,072 (128 TB) | 2048 (2 PB) | 4 PB
2048 | 262,144 (256 TB) | 260,000 | 96 TB ** | 262,144 (256 TB) | 262,144 (256 TB) | 262,144 (256 TB) | 4096 (4 PB) | 8 PB
4096 | 262,144 (256 TB) | 262,144 | 96 TB ** | 262,144 (256 TB) | 524,288 (512 TB) | 524,288 (512 TB) | 8192 (8 PB) | 16 PB
8192 | 262,144 (256 TB) | 262,144 | 96 TB ** | 262,144 (256 TB) | 1,048,576 (1024 TB) | 1,048,576 (1024 TB) | 16384 (16 PB) | 32 PB

* The total capacity values assume that all of the storage pools in the system use the same extent size.
** Please see the following Flash

[{"Business Unit":{"code":"BU058","label":"IBM Infrastructure w\/TPS"},"Product":{"code":"ST3FR7","label":"IBM Storwize V7000"},"Component":"","Platform":[{"code":"PF025","label":"Platform Independent"}],"Version":"8.2.1","Edition":"","Line of Business":{"code":"LOB26","label":"Storage"}}]

Document Information

Modified date:
28 March 2023

UID

ibm10741421