
V7.8.x Configuration Limits and Restrictions for IBM System Storage SAN Volume Controller

Preventive Service Planning


Abstract

This document lists the configuration limits and restrictions specific to SAN Volume Controller software version 7.8.x

Content

The use of WAN optimization devices such as Riverbed is not supported in native Ethernet IP partnership configurations containing SAN Volume Controller.
 
DRAID Strip Size
For candidate drives with a capacity greater than 4TB, a strip size of 128 cannot be specified for either RAID-5 or RAID-6 DRAID arrays. A strip size of 256 should be used for these drives.
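
The strip size is selected when the array is created. As an illustration only (the pool name, drive class and drive count below are placeholders, and optional parameters are omitted), a DRAID-6 array on such drives might be created with an explicit strip size of 256:

   svctask mkdistributedarray -level raid6 -driveclass 0 -drivecount 10 -stripsize 256 Pool0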
 
Transparent Cloud Tiering
Transparent cloud tiering on the system is subject to configuration limits and rules. See the following link for details:
http://www.ibm.com/support/knowledgecenter/STPVGU_7.8.0/com.ibm.storage.svc.console.780.doc/svc_tctmaxlimitsconfig.html

The following restrictions apply for Transparent Cloud Tiering:
a. When a cloud account is created, it must continue to use the same encryption type throughout the life of the data in that cloud account. Even if the cloud account object is removed and remade on the system, the encryption type for that cloud account may not be changed while backup data for that system exists in the cloud provider.

b. When performing re-key operations on a system that has an encryption-enabled cloud account, perform the commit operation immediately after the prepare operation (a hedged CLI sketch follows this list). Remember to retain the previous system master key (on USB or in the key server), as this key may still be needed to retrieve your cloud backup data when performing a T4 recovery or an import.

c. Import of data is not supported from systems in which the cloud account was created on a code level prior to 7.8.1.0.

d. Customers using TCT at 7.8.0.x who want to perform the unusual command sequence of rmcloudaccount/mkcloudaccount using the same cluster ID and container prefix should wait until they have upgraded to 7.8.1.0. Customers should perform the actions in e. below at 7.8.1.0 prior to performing any rmcloudaccount/mkcloudaccount sequence.

e. If you have configured TCT on your system, have created backup data in the cloud provider associated with your cloud account, and are upgrading from 7.8.0.x to 7.8.1.x, then perform the following operations after the upgrade has completed:
  • svctask chsystem -name <temporary_name>
    svctask chsystem -name <original_name>
This will synchronise the content of the cloud provider and the system cloud account.

f. The restore_uid option should not be used when a backup is imported to a new cluster.

g. Import of TCT data is only supported from systems whose backup data was created at 7.8.0.1.

h. Transparent cloud tiering uses AWS Signature Version 2 (Sig V2) when connecting to Amazon regions, and does not currently support regions that require Sig V4.
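
For restriction b., the USB-key re-key sequence is typically issued back to back so that the prepare is committed immediately. A minimal sketch, assuming USB-based encryption (a key server configuration uses the -keyserver form of the same command):

   svctask chencryption -usb newkey -key prepare
   svctask chencryption -usb newkey -key commit

Retain the previous master key material afterwards, as noted above.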
 

NPIV (N_Port ID Virtualization)
SAN Volume Controller and Storwize version 7.7 introduced support for NPIV (N_Port ID Virtualization) for Fibre Channel fabric attachment. FCoE is not supported with NPIV. The following recommendations and restrictions apply when implementing the NPIV feature.
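
NPIV is controlled per I/O group and passes through a transitional state before being fully enabled. A minimal sketch of the CLI sequence, assuming I/O group 0 and that fabric zoning for the virtual WWPNs is already in place:

   svctask chiogrp -fctargetportmode transitional 0
   svctask chiogrp -fctargetportmode enabled 0
   svcinfo lstargetportfc

The lstargetportfc view can be used to confirm the state of the virtualised target ports before and after each step.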

Operating systems not currently supported for use with NPIV:
RHEL6 and earlier on IBM Power
HPUX 11iV2
Veritas DMP multipathing on Windows with RAID-5 volumes in VxVM

General requirements

Required SDD versions for IBM AIX and Microsoft Windows Environments:

  1. IBM AIX operating systems require a minimum SDDPCM version of 2.6.8.0.
  2. Microsoft Windows requires a minimum SDDDSM version of 2.4.7.0. The latest recommended level, which resolves the issues listed below, is 2.4.7.1 (see the version check sketched below).
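
The installed multipath driver level can be confirmed on the host; a minimal sketch (output formats vary by release):

   On AIX with SDDPCM:      pcmpath query version
   On Windows with SDDDSM:  datapath query version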

Path Optimisation

User intervention may be required when changing the NPIV state from "Transitional" to "Disabled". All paths to a LUN with SDDDSM or SDDPCM may remain "Non-Optimized" when NPIV is changed to "Disabled" from the "Transitional" state.
To resolve this issue, use the following instructions:

IBM AIX 
For SDDPCM: 
   Run "pcmpath chgprefercntl device <device number>/<device number range>" on AIX. This will restore both Optimized and Non-Optimized paths for all the LUNs correctly. 

Windows 2008 and 2012 
For SDDDSM: 
   Run "datapath rescanhw" on Windows. This will restore both Optimized and Non-Optimized paths for all the LUNs correctly. This issue is resolved with SDDDSM version 2.4.7.1 
  
Windows 2008 and 2012 Non-Preferred Paths with SDDDSM 
When NPIV enters the Transitional state from Disabled with all the SDDDSM paths in the Non-Preferred state, the paths to the virtual ports also become Non-Preferred. This path configuration might cause I/O failures as soon as NPIV moves into the "Enabled" state.
As a workaround, configure at least one preferred path to each LUN while in the NPIV "Disabled" state. This issue is resolved with SDDDSM version 2.4.7.1.

Solaris 
Emulex HBA Settings:
1. When implementing NPIV on Solaris 11, the default disk I/O timeout needs to be changed to 120 seconds by adding "set sd:sd_io_time=120" to the /etc/system file, as sketched after this list. A system reboot is required for the change to take effect.
2. NPIV is not supported when ports on the host HBA are connected to a 16Gb SAN.
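
For item 1, a minimal sketch of the change (run as root; the append assumes the entry is not already present):

   echo "set sd:sd_io_time=120" >> /etc/system
   init 6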

Other Operating Systems
Other operating systems may also experience the same issue when changing the NPIV state from "Transitional" to "Disabled", in which case the operating-system-specific rescan command should be used.

Fabric Attachment

NPIV mode on SVC or Storwize is only supported when used with Brocade or Cisco Fibre Channel SAN switches that are NPIV capable.


Nodes in an I/O group cannot be replaced by nodes with less memory when compressed volumes are present

If a customer must migrate from 64GB to 32GB memory node canisters in an I/O group, all compressed volume copies in that I/O group must first be removed (a hedged CLI check is sketched after the list below). This restriction applies to 7.7.0.0 and later software.

A customer must not perform the following sequence:

  1. Create an I/O group with node canisters with 64GB of memory.
  2. Create compressed volumes in that I/O group.
  3. Delete both node canisters from the system with the CLI or GUI.
  4. Install new node canisters with 32GB of memory and add them to the configuration in the original I/O group with the CLI or GUI.
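
Before replacing node canisters, the presence of compressed copies can be checked from the CLI; a minimal sketch (inspect the compressed_copy column for volumes owned by the affected I/O group):

   svcinfo lsvdiskcopy -delim ,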

HyperSwap
When using the HyperSwap function with software version 7.8.0.0 and higher, please configure your host multipath driver to use an ALUA-based path policy. 

Due to the requirement for multiple access I/O groups, SAS-attached host types are not supported with HyperSwap volumes.

A volume configured with multiple access I/O groups, on a system in the storage layer, cannot be virtualized by a system in the replication layer. This restriction prevents a HyperSwap volume on one system from being virtualized by another.

AIX Live Partition Mobility (LPM) 
AIX LPM is supported with the HyperSwap function and AIX 7.x.


Clustered Systems
A SAN Volume Controller system at version 7.8.0.0 and higher requires native Fibre Channel SAN or alternatively 8Gbps/16Gbps Direct Attach Fibre Channel connectivity for communication between all nodes in the local cluster. Fibre Channel over Ethernet (FCoE) connectivity for communication between all nodes in the local cluster is also supported.

Partnerships between systems for Metro Mirror or Global Mirror replication can use Fibre Channel, native Ethernet, or FCoE connectivity; however, direct FCoE links are only supported up to a maximum of 300 metres. Distances greater than 300 metres are only supported when using an FCIP link or Fibre Channel between the source and target.


Direct Attachment

For information on supported configurations using direct attachment, please see the following document:
Direct Attachment of Storwize and SAN Volume Controller Systems


Cisco Nexus

The minimum level of Cisco Nexus firmware supported for FCoE with the IBM 2145-DH8 / 2145-SV1 is 5.2(1)N1(2a).


16Gbps Fibre Channel Node Connection
Please see the IBM System Storage Interoperation Center (SSIC) - https://www.ibm.com/systems/support/storage/ssic/interoperability.wss - for the 16Gbps Fibre Channel configurations supported with 16Gbps node hardware.

Note: 16Gbps node hardware is supported when connected to Brocade and Cisco 8Gbps or 16Gbps fabrics only.

Direct connections to 2Gbps or 4Gbps SANs, or direct host attachment to 2Gbps or 4Gbps ports, are not supported.

Other configured switches which are not directly connected to the 16Gbps node hardware can be any supported fabric switch as currently listed in SSIC.


IP Partnership

Using an Ethernet switch to convert a 10Gbps IP partnership link to a 1Gbps link, or vice versa, is not supported. Therefore, the IP infrastructure at the two partnership sites should be either both 1Gbps or both 10Gbps. Bandwidth limiting on 10Gbps and 1Gbps IP partnerships between sites is, however, supported (a hedged example follows below).
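
A minimal sketch of creating an IP partnership with a bandwidth limit; the address and rates are placeholders, and the corresponding command must also be run on the partner system:

   svctask mkippartnership -type ipv4 -clusterip 192.0.2.10 -linkbandwidthmbits 1000 -backgroundcopyrate 50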


Fabric Limitations

Only one FCF (Fibre Channel Forwarder) switch per fabric is supported.


VMware vSphere Virtual Volumes (VVols)

The maximum number of Virtual Machines on a single VMware ESXi host in an SVC / VVol storage configuration is limited to 680.

The use of VMware vSphere Virtual Volumes (VVols) on a system that is configured for HyperSwap is not currently supported with SVC/Storwize.


DS4000 Maintenance

SAN Volume Controller supports concurrent ESM firmware upgrades for those DS4000 models listed as such on the Supported Hardware List when they are running controller firmware 06.23.05.00 or later. Controllers running firmware levels earlier than 06.23.05.00 are not supported for concurrent ESM upgrades. Customers in this situation who want to gain support for concurrent ESM upgrades will need to first upgrade the DS4000 controller firmware to level 06.23.05.00. This action is a controller firmware upgrade, not an ESM upgrade, and concurrent controller firmware upgrades are already supported in conjunction with SAN Volume Controller. Once the controller firmware is at 06.23.05.00 or later, the ESM firmware can be upgraded concurrently.

Note: The DS4000 ESM firmware upgrade must be done on one disk expansion enclosure at a time. A 10 minute delay is required from when one enclosure is upgraded to the start of the upgrade of another enclosure. Confirm via the Storage Manager application's Recovery Guru that the DS4000 status is in an optimal state before upgrading the next enclosure. If it is not, do not continue ESM firmware upgrades until the problem is resolved.


Host Limitations

Microsoft Offload Data Transfer (ODX) and SDDDSM Requirements
SAN Volume Controller 7.5.0 introduced support for Microsoft ODX. In order to use this function, all Windows hosts accessing SAN Volume Controller are required to be at a minimum SDDDSM version of 2.4.5.0. Earlier versions of SDDDSM are not supported when the ODX function is activated.

Non-Disruptive Volume Move (NDVM)
Windows 2008 with SDDDSM - stale paths will be left after moving a volume to a new I/O group. A host reboot is required in order to remove the stale paths.
AIX 6.1 with SDDPCM - preferred paths are not detected after moving a volume to a new I/O group.

Windows NTP server
The Linux NTP client used by SAN Volume Controller may not always function correctly with the Windows W32Time NTP server.
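
If time synchronisation problems are observed, one option to consider is pointing the system at a non-Windows NTP source; a minimal sketch with a placeholder address:

   svctask chsystem -ntpip 192.0.2.123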


Oracle
 
Oracle Version and OS | Restrictions that apply
Oracle Release 11.2, any platform | 1
Oracle Release 12.1, any platform | 1

Restriction 1: Oracle ASM disk groups may dismount with the following error:

"Waited 15 secs for write IO to PST"

Recommendation: Increase asm_hbeatiowait to 120 seconds to prevent this issue from occurring.

Applies to Oracle Database Enterprise Edition versions 11.2.0.3 to 12.1.0.1 [Release 11.2 to 12.1] on any platform.


Priority Flow Control for iSCSI

Priority Flow Control for iSCSI is supported on Brocade VDX 10-gigabit Ethernet switches only.


Maximum Configurations

Configuration limits for SAN Volume Controller:
 
Property | Hardware Type | Maximum Number | Comments

System (Cluster) Properties
Nodes per system (cluster) | All | 8 | Arranged as four I/O groups
Nodes per fabric | All | 64 | Maximum number of SVC or Storwize family system nodes that can be present on the same Fibre Channel fabric, with visibility of each other
I/O groups per system | All | 4 | Each containing two nodes
Fabrics per system | 2145-CF8 | 6 | The number of counterpart SANs which are supported: up to 4 fabrics using native Fibre Channel ports, up to 2 fabrics using FCoE ports
Fabrics per system | 2145-CG8 | 8 | The number of counterpart SANs which are supported
Fabrics per system | 2145-DH8, 2145-SV1 | 12 | The number of counterpart SANs which are supported
USB ports | 2145-CF8, 2145-CG8 | 4 |
USB ports | 2145-DH8, 2145-SV1 | 6 |
Inter-cluster partnerships per system | All | 3 (maximum 1 IP partnership is supported) | A system may be partnered with up to three remote systems. No more than four systems may be in the same connected set
IP Quorum devices per system | All | 5 |
Data encryption keys per system | 2145-DH8, 2145-SV1 | 1024 |

Node Properties
Logins per node Fibre Channel WWPN | | 512 | Includes logins from server HBAs, disk controller ports, node ports within the same system and node ports from remote systems
Fibre Channel buffer credits per port | 8Gbps FC adapter | 255 | The number of credits granted by the switch to the node
Fibre Channel buffer credits per port | 16Gbps FC adapter | 4095 | The number of credits granted by the switch to the node
iSCSI sessions per node | | 1024 | A maximum of 256 can be backend sessions

Managed Disk Properties
Managed disks (MDisks) per system | | 4096 | The maximum number of logical units which can be managed by a cluster. Internal distributed arrays consume 16 logical units. This number includes external MDisks which have not been configured into storage pools (managed disk groups)
Managed disks per storage pool (managed disk group) | | 128 |
Storage pools per system | | 1024 |
Parent pools per system | | 128 |
Child pools per system | | 1023 |
Managed disk extent size | | 8192 MB |
Capacity for an individual internal managed disk (array) | | - | No limit is imposed beyond the maximum number of drives per array limits. Maximum size is dependent on the extent size of the Storage Pool; see the Extents comparison table below
Capacity for an individual external managed disk | | 1 PB | External managed disks larger than 2 TB are only supported for certain types of storage systems; refer to the supported hardware matrix for further details. Maximum size is dependent on the extent size of the Storage Pool; see the Extents comparison table below
Total storage capacity manageable per system | | 32 PB | Maximum requires an extent size of 8192 MB to be used. This limit represents the per system maximum of 2^22 extents. See the Extents comparison table below

Volume (Virtual Disk) Properties
Basic volumes (VDisks) per system | | 10000 | Each basic volume uses 1 VDisk, each with one copy
Stretched volumes per system | | 5000 | Each stretched volume uses 1 VDisk, each with two copies
HyperSwap volumes per system | | 1250 | Each HyperSwap volume uses 4 VDisks, each with one copy, 1 active-active remote copy relationship and 4 FlashCopy mappings
Volumes per I/O group (volumes per caching I/O group) | | 10000 |
Thin-provisioned (space-efficient) volume copies per system | | - | No limit is imposed here beyond the volume copies per system limit
Compressed volume copies per system | 2145-CF8, 2145-CG8 | 800 | Maximum requires an 8-node cluster
Compressed volume copies per I/O group | 2145-CF8, 2145-CG8 | 200 |
Compressed volume copies per system | 2145-DH8, 2145-SV1 | 2048 | Maximum requires an 8-node cluster
Compressed volume copies per I/O group | 2145-DH8, 2145-SV1 | 512 | With 64GB memory option only
Volumes per storage pool | | - | No limit is imposed beyond the volumes per system limit
Fully-allocated volume capacity | | 256 TB | Maximum size for an individual fully-allocated volume. Maximum size is dependent on the extent size of the Storage Pool; see the Extents comparison table below
Thin-provisioned (space-efficient) volume capacity | | 256 TB | Maximum size for an individual thin-provisioned volume. Maximum size is dependent on the extent size of the Storage Pool; see the Extents comparison table below
Compressed volume capacity (pools containing non-Flash storage) | | 16 TB | Maximum size for an individual compressed volume. See this Flash for further information on this limit. Maximum size is dependent on the extent size of the Storage Pool; see the Extents comparison table below
Compressed volume capacity (pools containing all-Flash storage) | | 32 TB |
Host mappings per system | | 20,000 | See also volume mappings per host object below

Mirrored Volume (Virtual Disk) Properties
Copies per volume | | 2 |
Volume copies per system | | 10000 | The maximum number of volumes cannot all have the maximum number of copies
Total mirrored volume capacity per I/O group | | 1024 TB |

Generic Host Properties
Host objects (IDs) per system | 2145-CF8, 2145-CG8, 2145-DH8, 2145-SV1 | 2048 | A host object may contain both Fibre Channel ports and iSCSI names
Host objects (IDs) per I/O group | 2145-CF8, 2145-CG8, 2145-DH8, 2145-SV1 | 512 | Refer to the additional Fibre Channel and iSCSI host limits below
Volume mappings per host object | 2145-CF8, 2145-CG8, 2145-DH8, 2145-SV1 | 2048* | *Although SVC allows the mapping of up to 2048 volumes per host object, not all hosts are capable of accessing or managing this number of volumes; the practical mapping limit is restricted by the host OS, not SVC. This limit does not apply to hosts of type adminlun (used to support VMware VVols)
Total Fibre Channel ports and iSCSI names per system | | 8192 |
Total Fibre Channel ports and iSCSI names per I/O group | | 2048 |
Total Fibre Channel ports and iSCSI names per host object | | 32 |
iSCSI names per host object (ID) | | 8 |

Host Cluster Properties
Host clusters per system | | 512 |
Hosts in a host cluster | | 128 |

Fibre Channel Host Properties (including hosts attached using FCoE)
Fibre Channel hosts per system | | 2048 |
Fibre Channel host ports per system | | 8192 |
Fibre Channel hosts per I/O group | | 512 |
Fibre Channel host ports per I/O group | | 2048 |
Fibre Channel host ports per host object (ID) | | 32 |
Simultaneous I/Os per node FC port | 8Gbps FC adapter | 2048 |
Simultaneous I/Os per node FC port | 16Gbps FC adapter | 4096 |

iSCSI Host Properties
iSCSI hosts per system | | 2048 |
iSCSI hosts per I/O group | | 512 |
iSCSI names per host object | | 8 |
iSCSI names per I/O group | | 512 |
iSCSI (SCSI 3) registrations per VDisk | | 512 |

Copy Services Properties
Remote Copy (Metro Mirror and Global Mirror) relationships per system | | 10000 | This can be any mix of Metro Mirror and Global Mirror relationships
Active-active relationships (HyperSwap) | | 1250 |
Remote Copy relationships per consistency group | | - | No limit is imposed beyond the Remote Copy relationships per system limit. Refer to the Changes to support for Global Mirror with Change Volumes page for information relating to GMCV performance considerations and best practice
Remote Copy consistency groups per system | | 256 |
Total Metro Mirror and Global Mirror volume capacity per I/O group | | 1024 TB | This limit is the total capacity for all master and auxiliary volumes in the I/O group
Total number of Global Mirror with Change Volumes relationships per system | 2145-CF8, 2145-CG8 | 256 | Maximum of 256 relationships per consistency group. Change volumes used for active-active relationships do not count towards this limit
Total number of Global Mirror with Change Volumes relationships per system | 2145-DH8 | 1500 | Maximum of 256 relationships per consistency group. Change volumes used for active-active relationships do not count towards this limit
Total number of Global Mirror with Change Volumes relationships per system | 2145-SV1 | 2500 | Maximum of 256 relationships per consistency group. Change volumes used for active-active relationships do not count towards this limit
FlashCopy mappings per system | | 5000 |
FlashCopy targets per source | | 256 |
FlashCopy mappings per consistency group | | 512 |
FlashCopy consistency groups per system | | 500 |
Total FlashCopy volume capacity per I/O group | | 4096 TB |

IP Partnership Properties
Inter-cluster IP partnerships per system | | 1 | A system may be partnered with up to three remote systems; a maximum of one of those can be IP and the other two FC
I/O groups per system | | 2 | The nodes from a maximum of two I/O groups per system can be used for IP partnership
Inter-site links per IP partnership | | 2 | A maximum of two inter-site links can be used between two IP partnership sites
Ports per node | | 1 | A maximum of one port per node can be used for IP partnership
IP partnership software compression limit | 2145-CF8, 2145-CG8 | 70 MB/s |
IP partnership software compression limit | 2145-DH8, 2145-SV1 | 140 MB/s |

Internal Storage System Properties
Drives per node | 2145-CF8, 2145-CG8 | 4 |
Max expansion enclosures per I/O group | 2145-DH8, 2145-SV1 | 20 | The limiting factor is the combined chain weight of the various components. The maximum SAS chain weight that can be attached to a node SAS port is 10: 2145-92F enclosures have a chain weight of 2.5; 2145-24F and 2145-12F enclosures have a chain weight of 1
Max drives per enclosure | 2145-DH8, 2145-SV1 | 92 |
Max drives per I/O group | 2145-DH8, 2145-SV1 | 736 |
Max drives per system | 2145-DH8, 2145-SV1 | 2944 |

Non-Distributed RAID Array Properties
Min-Max member drives per RAID-0 array | 2145-CF8, 2145-CG8 | 1-4 | All drives in a RAID-0 array must be located in the same node
Min-Max member drives per RAID-0 array | 2145-DH8, 2145-SV1 | 1-8 | All drives in a RAID-0 array must be located in the same node
Min-Max member drives per RAID-1 array | 2145-CF8, 2145-CG8 | 2-2 | The pair of drives must contain one drive from one node in the I/O group and one drive from the other node in the same I/O group
Min-Max member drives per RAID-1 array | 2145-DH8, 2145-SV1 | 2-2 | The pair of drives must contain one drive from one node in the I/O group and one drive from the other node in the same I/O group
Min-Max member drives per RAID-5 array | 2145-DH8, 2145-SV1 | 3-16 |
Min-Max member drives per RAID-6 array | 2145-DH8, 2145-SV1 | 5-16 |
Min-Max member drives per RAID-10 array | 2145-CF8, 2145-CG8 | 2-8 | The drives are specified as a sequence of drive pairs. Each pair of drives must contain one drive from a node in the I/O group and a drive from the other node in the same I/O group
Min-Max member drives per RAID-10 array | 2145-DH8, 2145-SV1 | 2-16 | The drives are specified as a sequence of drive pairs. Each pair of drives must contain one drive from a node in the I/O group and a drive from the other node in the same I/O group

Distributed RAID Array Properties
Arrays per system | 2145-DH8, 2145-SV1 | 32 | The presence of non-DRAID arrays will reduce this limit
Encrypted arrays per system | 2145-DH8, 2145-SV1 | 32 | The presence of non-DRAID arrays will reduce this limit
Arrays per I/O group | 2145-DH8, 2145-SV1 | 10 | The presence of non-DRAID arrays will reduce this limit
Drives per array | 2145-DH8, 2145-SV1 | 128 |
Min-Max member drives per RAID-5 array | 2145-DH8, 2145-SV1 | 4-128 |
Min-Max member drives per RAID-6 array | 2145-DH8, 2145-SV1 | 6-128 |
Rebuild areas per array | 2145-DH8, 2145-SV1 | 1-4 |
Min-Max stripe width for RAID-5 array | 2145-DH8, 2145-SV1 | 3-16 |
Min-Max stripe width for RAID-6 array | 2145-DH8, 2145-SV1 | 5-16 |

External Storage System Properties
Storage system WWNNs per system (cluster) | | 1024 |
Storage system WWPNs per system (cluster) | | 1024 |
WWNNs per storage system | | 16 |
WWPNs per WWNN | | 16 |
LUNs (managed disks) per storage system | | - | No limit is imposed beyond the managed disks per system (cluster) limit

System and User Management Properties
User accounts per system | | 400 | Includes the default user accounts
User groups per system | | 256 | Includes the default user groups
Authentication servers per system | | 1 |
NTP servers per system | | 1 |
iSNS servers per system | | 1 |
Concurrent open SSH sessions per system | | 32 |

Event Notification Properties
SNMP servers per system | | 6 |
Syslog servers per system | | 6 |
Email (SMTP) servers per system | | 6 | Email servers are used in turn until the email is successfully sent
Email users (recipients) per system | | 12 |
LDAP servers per system | | 6 |

2145-DH8 / 2145-SV1 Options
8Gbps Fibre Channel cards per node | | 4 |
16Gbps Fibre Channel cards per node | | 4 |
Compression accelerator cards per node | | 2 |
10GE Ethernet cards per node | | 1 |
10Gbps Ethernet ports per node | | 4 |
1Gbps Ethernet ports per node | | 3 |

 

Extents 

The following table compares the maximum volume, MDisk and system capacity for each extent size.

Extent size (MB) | Maximum non thin-provisioned volume capacity in GB | Maximum thin-provisioned volume capacity in GB | Maximum compressed volume size** | Maximum MDisk capacity in GB | Maximum DRAID MDisk capacity in TB | Total storage capacity manageable per system*
16 | 2048 (2 TB) | 2000 | | 2048 (2 TB) | 32 | 64 TB
32 | 4096 (4 TB) | 4000 | | 4096 (4 TB) | 64 | 128 TB
64 | 8192 (8 TB) | 8000 | | 8192 (8 TB) | 128 | 256 TB
128 | 16,384 (16 TB) | 16,000 | | 16,384 (16 TB) | 256 | 512 TB
256 | 32,768 (32 TB) | 32,000 | | 32,768 (32 TB) | 512 | 1 PB
512 | 65,536 (64 TB) | 65,000 | | 65,536 (64 TB) | 1024 (1 PB) | 2 PB
1024 | 131,072 (128 TB) | 130,000 | 96 TB** | 131,072 (128 TB) | 2048 (2 PB) | 4 PB
2048 | 262,144 (256 TB) | 260,000 | 96 TB** | 262,144 (256 TB) | 4096 (4 PB) | 8 PB
4096 | 262,144 (256 TB) | 262,144 | 96 TB** | 524,288 (512 TB) | 8192 (8 PB) | 16 PB
8192 | 262,144 (256 TB) | 262,144 | 96 TB** | 1,048,576 (1024 TB) | 16,384 (16 PB) | 32 PB


* The total capacity values assume that all of the storage pools in the system use the same extent size.
** Please see the following Flash.
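
As a worked check of the 8192 MB row, the per-system limit of 2^22 extents gives:

   4,194,304 extents x 8192 MB = 34,359,738,368 MB = 32,768 TB = 32 PB

The same calculation with a 1024 MB extent size gives 4 PB, matching the table.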

[{"Product":{"code":"STPVGU","label":"SAN Volume Controller"},"Business Unit":{"code":"BU058","label":"IBM Infrastructure w\/TPS"},"Component":"7.8","Platform":[{"code":"","label":"SAN Volume Controller"}],"Version":"7.8","Edition":"","Line of Business":{"code":"LOB26","label":"Storage"}}]

Document Information

Modified date:
30 June 2021

UID

ssg1S1009560