IBM Tivoli Storage Manager, Version 7.1

DEFINE STGPOOL (Define a primary storage pool assigned to sequential access devices)

Use this command to define a primary storage pool that is assigned to sequential access devices.

Privilege class

To issue this command, you must have system privilege.

Syntax

>>-DEFine STGpool--pool_name--device_class_name----------------->

   .-POoltype--=--PRimary-.                                    
>--+----------------------+--+-----------------------------+---->
   '-POoltype--=--PRimary-'  '-DESCription--=--description-'   

   .-ACCess--=--READWrite-------.   
>--+----------------------------+------------------------------->
   '-ACCess--=--+-READWrite---+-'   
                +-READOnly----+     
                '-UNAVailable-'     

   .-MAXSIze--=--NOLimit-------------------.   
>--+---------------------------------------+-------------------->
   |                               (1) (2) |   
   '-MAXSIze--=--maximum_file_size---------'   

   .-CRCData--=--No---------.   
>--+------------------------+----------------------------------->
   '-CRCData--=--+-Yes----+-'   
                 |    (1) |     
                 '-No-----'     

>--+-----------------------------------+------------------------>
   |                           (1) (2) |   
   '-NEXTstgpool--=--pool_name---------'   

   .-HIghmig--=--90--------------.   
>--+-----------------------------+------------------------------>
   |                     (1) (2) |   
   '-HIghmig--=--percent---------'   

   .-LOwmig--=--70--------------.   
>--+----------------------------+------------------------------->
   |                    (1) (2) |   
   '-LOwmig--=--percent---------'   

   .-REClaim--=--60--------------.   
>--+-----------------------------+------------------------------>
   |                     (1) (2) |   
   '-REClaim--=--percent---------'   

   .-RECLAIMPRocess--=--1--------------.   
>--+-----------------------------------+------------------------>
   |                           (1) (2) |   
   '-RECLAIMPRocess--=--number---------'   

>--+--------------------------------------+--------------------->
   |                              (1) (2) |   
   '-RECLAIMSTGpool--=--pool_name---------'   

   .-RECLAMATIONType--=--THRESHold-----------------.   
>--+-----------------------------------------------+------------>
   |                                   (1) (2) (3) |   
   '-RECLAMATIONType--=--+-THRESHold-+-------------'   
                         '-SNAPlock--'                 

   .-COLlocate--=--GRoup-------------.   
>--+---------------------------------+-------------------------->
   |                             (2) |   
   '-COLlocate--=--+-No--------+-----'   
                   +-GRoup-----+         
                   +-NODe------+         
                   '-FIlespace-'         

                         (2)  .-REUsedelay--=--0--------.   
>--MAXSCRatch--=--number------+-------------------------+------->
                              |                     (2) |   
                              '-REUsedelay--=--days-----'   

>--+----------------------------------+------------------------->
   |                          (1) (2) |   
   '-OVFLOcation--=--location---------'   

   .-MIGDelay--=--0------------.   
>--+---------------------------+-------------------------------->
   |                   (1) (2) |   
   '-MIGDelay--=--days---------'   

   .-MIGContinue--=--Yes-------------.   
>--+---------------------------------+-------------------------->
   |                         (1) (2) |   
   '-MIGContinue--=--+-No--+---------'   
                     '-Yes-'             

   .-MIGPRocess--=--1--------------.   
>--+-------------------------------+---------------------------->
   |                       (1) (2) |   
   '-MIGPRocess--=--number---------'   

   .-DATAFormat--=--NATive------------------.   
>--+----------------------------------------+------------------->
   |                                (2) (4) |   
   '-DATAFormat--=--+-NATive------+---------'   
                    +-NONblock----+             
                    +-NETAPPDump--+             
                    +-CELERRADump-+             
                    '-NDMPDump----'             

   .-AUTOCopy--=--CLient--------.   
>--+----------------------------+------------------------------->
   '-AUTOCopy--=--+-None------+-'   
                  +-CLient----+     
                  +-MIGRation-+     
                  '-All-------'     

>--+---------------------------------------------+-------------->
   |                  .-,----------------------. |   
   |                  V                (1) (2) | |   
   '-COPYSTGpools--=----copy_pool_name---------+-'   

   .-COPYContinue--=--Yes-------------.   
>--+----------------------------------+------------------------->
   |                          (1) (2) |   
   '-COPYContinue--=--+-Yes-+---------'   
                      '-No--'             

>--+-----------------------------------------------+------------>
   |                     .-,---------------------. |   
   |                     V                       | |   
   '-ACTIVEDATApools--=----active-data_pool_name-+-'   

   .-DEDUPlicate--=--No----------.   
>--+-----------------------------+------------------------------>
   '-DEDUPlicate--=--+-No------+-'   
                     |     (5) |     
                     '-Yes-----'     

   .-IDENTIFYPRocess--=--1----------.   
>--+--------------------------------+--------------------------><
   |                            (6) |   
   '-IDENTIFYPRocess--=--number-----'   

Notes:
  1. This parameter is not available for storage pools that use the data formats NETAPPDUMP, CELERRADUMP, or NDMPDUMP.
  2. This parameter is not available or is ignored for Centera storage pools.
  3. The RECLAMATIONTYPE=SNAPLOCK setting is valid only for storage pools that are defined to servers that are enabled for System Storage® Archive Manager. The storage pool must be assigned to a FILE device class, and the directories that are specified in the device class must be NetApp SnapLock volumes.
  4. The values NETAPPDUMP, CELERRADUMP, and NDMPDUMP are not valid for storage pools that are defined with a FILE-type device class.
  5. This parameter is valid only for storage pools that are defined with a FILE-type device class.
  6. This parameter is available only when the value of the DEDUPLICATE parameter is YES.

Parameters

pool_name (Required)
Specifies the name of the storage pool to be defined. The name must be unique, and the maximum length is 30 characters.
device_class_name (Required)
Specifies the name of the device class to which this storage pool is assigned. You can specify any device class except for the DISK device class.
POoltype=PRimary
Specifies that you want to define a primary storage pool. This parameter is optional. The default value is PRIMARY.
DESCription
Specifies a description of the storage pool. This parameter is optional. The maximum length of the description is 255 characters. Enclose the description in quotation marks if it contains any blank characters.
ACCess
Specifies how client nodes and server processes (such as migration and reclamation) can access files in the storage pool. This parameter is optional. The default value is READWRITE. Possible values are:
READWrite
Specifies that client nodes and server processes can read and write to files stored on volumes in the storage pool.
READOnly
Specifies that client nodes can only read files from the volumes in the storage pool.

Server processes can move files within the volumes in the storage pool. However, no new writes are permitted to volumes in the storage pool from volumes outside the storage pool.

If this storage pool has been specified as a subordinate storage pool (with the NEXTSTGPOOL parameter) and is defined as readonly, the storage pool is skipped when server processes attempt to write files to the storage pool.

UNAVailable
Specifies that client nodes cannot access files stored on volumes in the storage pool.

Server processes can move files within the volumes in the storage pool and can also move or copy files from this storage pool to another storage pool. However, no new writes are permitted to volumes in the storage pool from volumes outside the storage pool.

If this storage pool has been specified as a subordinate storage pool (with the NEXTSTGPOOL parameter) and is defined as unavailable, the storage pool is skipped when server processes attempt to write files to the storage pool.

MAXSIze
Specifies the maximum size for a physical file that the server can store in the storage pool. This parameter is optional. The default value is NOLIMIT. Possible values are:
NOLimit
Specifies that there is no maximum size limit for physical files stored in the storage pool.
maximum_file_size
Limits the maximum physical file size. Specify an integer from 1 to 999999, followed by a scale factor. For example, MAXSIZE=5G specifies that the maximum file size for this storage pool is 5 gigabytes. Scale factors are:
Scale factor Meaning
K kilobyte
M megabyte
G gigabyte
T terabyte

If a file exceeds the maximum size and no pool is specified as the next storage pool in the hierarchy, the server does not store the file. If a file exceeds the maximum size and a pool is specified as the next storage pool, the server stores the file in the next storage pool that can accept the file size. If you specify the next storage pool parameter, at least one storage pool in your hierarchy should have no limit on the maximum size of a file. By having no limit on the size for at least one pool, you ensure that no matter what its size, the server can store the file.

For logical files that are part of an aggregate, the server considers the size of the aggregate to be the file size. Therefore, the server does not store logical files that are smaller than the maximum size limit if the files are part of an aggregate that is larger than the maximum size limit.

Restriction: This parameter is not available for storage pools that use the following data formats:
  • NETAPPDUMP
  • CELERRADUMP
  • NDMPDUMP
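The size rules above can be sketched in Python. This is an illustrative sketch only, not Tivoli Storage Manager code: the function names are hypothetical, and binary scale units (1 K = 1024 bytes) are assumed.

```python
# Hypothetical sketch of MAXSIZE parsing and the aggregate-size rule.
SCALE = {"K": 1024, "M": 1024**2, "G": 1024**3, "T": 1024**4}

def parse_maxsize(value):
    """Convert a MAXSIZE string like '5G' to bytes; NOLIMIT means no cap."""
    if value.upper() == "NOLIMIT":
        return None
    return int(value[:-1]) * SCALE[value[-1].upper()]

def fits_in_pool(file_size, aggregate_size, maxsize):
    """For a logical file in an aggregate, the aggregate size is compared
    against the limit, per the rule described above."""
    effective = aggregate_size if aggregate_size is not None else file_size
    return maxsize is None or effective <= maxsize

limit = parse_maxsize("5G")                       # 5 gigabytes
print(fits_in_pool(1024**3, None, limit))         # True: 1 GB standalone file
print(fits_in_pool(1024**2, 6 * 1024**3, limit))  # False: small file, 6 GB aggregate
```

Note the second call: the logical file is well under the limit, but it is rejected because its aggregate exceeds 5 GB.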
CRCData
Specifies whether a cyclic redundancy check (CRC) validates storage pool data when audit volume processing occurs on the server. This parameter is valid only for NATIVE data format storage pools. This parameter is optional. The default value is NO. By setting CRCDATA to YES and scheduling an AUDIT VOLUME command, you can continually ensure the integrity of data that is stored in your storage hierarchy. Possible values are:
Yes
Specifies that data is stored containing CRC information, allowing for audit volume processing to validate storage pool data. This mode impacts performance because more overhead is required to calculate and compare CRC values between the storage pool and the server.
No
Specifies that data is stored without CRC information.
Restriction: This parameter is not available for storage pools that use the following data formats:
  • NETAPPDUMP
  • CELERRADUMP
  • NDMPDUMP
Tip: For storage pools that are associated with the 3592, LTO, or ECARTRIDGE device type, logical block protection provides better protection against data corruption than CRC validation for a storage pool. If you specify CRC validation for a storage pool, data is validated only during volume auditing operations. Errors are identified after data is written to tape.
To enable logical block protection, specify a value of READWRITE for the LBPROTECT parameter on the DEFINE DEVCLASS and UPDATE DEVCLASS commands for the 3592, LTO, or ECARTRIDGE device types. Logical block protection is supported only on the following types of drives and media:
  • IBM® LTO5 and later.
  • IBM 3592 Generation 3 drives and later with 3592 Generation 2 media and later.
  • Oracle StorageTek T10000C drives.
NEXTstgpool
Specifies a primary storage pool to which files are migrated. You cannot migrate data from a sequential access storage pool to a random access storage pool. This parameter is optional.

If this storage pool does not have a next storage pool, the server cannot migrate files from this storage pool and cannot store files that exceed the maximum size for this storage pool in another storage pool.

When there is insufficient space available in the current storage pool, the NEXTSTGPOOL parameter for sequential access storage pools does not allow data to be stored into the next pool. In this case, the server issues a message and the transaction fails.

For next storage pools with a device type of FILE, the server completes a preliminary check to determine whether sufficient space is available. If space is not available, the server skips to the next storage pool in the hierarchy. If space is available, the server attempts to store data in that pool. However, it is possible that the storage operation might fail because, at the time the actual storage operation is attempted, the space is no longer available.

You cannot create a chain of storage pools that leads to an endless loop through the NEXTSTGPOOL parameter. At least one storage pool in the hierarchy must have no value specified for NEXTSTGPOOL.

Restriction: This parameter is not available for storage pools that use the following data formats:
  • NETAPPDUMP
  • CELERRADUMP
  • NDMPDUMP

If you specify a sequential access pool as the NEXTSTGPOOL, the pool can be only NATIVE or NONBLOCK data format.

HIghmig
Specifies that the server starts migration when storage pool utilization reaches this percentage. For sequential-access disk (FILE) storage pools, utilization is the ratio of data in a storage pool to the pool's total estimated data capacity, including the capacity of all scratch volumes specified for the pool. For storage pools that use tape media, utilization is the ratio of volumes that contain data to the total number of volumes in the storage pool. The total number of volumes includes the maximum number of scratch volumes. This parameter is optional. You can specify an integer from 0 to 100. The default value is 90.

When the storage pool exceeds the high migration threshold, the server can start migration of files by volume to the next storage pool defined for the pool. You can set the high migration threshold to 100 to prevent migration for the storage pool.

Restriction: This parameter is not available for storage pools that use the following data formats:
  • NETAPPDUMP
  • CELERRADUMP
  • NDMPDUMP
LOwmig
Specifies that the server stops migration when storage pool utilization is at or below this percentage. For sequential-access disk (FILE) storage pools, utilization is the ratio of data in a storage pool to the pool's total estimated data capacity, including the capacity of all scratch volumes specified for the pool. For storage pools that use tape media, utilization is the ratio of volumes that contain data to the total number of volumes in the storage pool. The total number of volumes includes the maximum number of scratch volumes. This parameter is optional. You can specify an integer from 0 to 99. The default value is 70.

When the storage pool reaches the low migration threshold, the server does not start migration of files from another volume. You can set the low migration threshold to 0 to permit migration to empty the storage pool.

Restriction: This parameter is not available for storage pools that use the following data formats:
  • NETAPPDUMP
  • CELERRADUMP
  • NDMPDUMP
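The two utilization definitions and the start/stop thresholds can be sketched as follows. This is an illustrative sketch only, not Tivoli Storage Manager code; the function names are hypothetical.

```python
# Hypothetical sketch of utilization and the HIGHMIG/LOWMIG thresholds.

def file_pool_utilization(data_stored, total_estimated_capacity):
    """FILE pools: ratio of stored data to the pool's total estimated data
    capacity, including the capacity of all scratch volumes."""
    return 100.0 * data_stored / total_estimated_capacity

def tape_pool_utilization(volumes_with_data, total_volumes):
    """Tape pools: ratio of volumes that contain data to total volumes,
    where the total includes the maximum number of scratch volumes."""
    return 100.0 * volumes_with_data / total_volumes

def migration_active(utilization, highmig=90, lowmig=70, migrating=False):
    """Migration starts when utilization exceeds HIGHMIG and stops once
    utilization is at or below LOWMIG."""
    if not migrating:
        return utilization > highmig
    return utilization > lowmig

print(tape_pool_utilization(46, 50))                  # 92.0
print(migration_active(92.0))                         # True: above HIGHMIG=90
print(migration_active(70.0, migrating=True))         # False: at LOWMIG, stop
```

Setting HIGHMIG=100 makes the start condition unreachable, which is why that value prevents migration; LOWMIG=0 lets migration run until the pool is empty.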
REClaim
Specifies when the server reclaims a volume, which is based on the percentage of reclaimable space on a volume. Reclaimable space is the amount of space that is occupied by files that are expired or deleted from the Tivoli® Storage Manager database.

Reclamation makes the fragmented space on volumes usable again by moving any remaining unexpired files from one volume to another volume, thus making the original volume available for reuse. This parameter is optional. You can specify an integer from 1 to 100. The default value is 60, except for storage pools that use WORM devices.

AIX, Sun Solaris, and Windows operating systems: For storage pools that use WORM devices, the default value is 100 to prevent reclamation from occurring. This is the default because a WORM volume is not reusable. If necessary, you can lower the value to allow the server to consolidate data onto fewer volumes. Volumes that are emptied by reclamation can be checked out of the library, freeing slots for new volumes.

When determining which volumes in a storage pool to reclaim, the Tivoli Storage Manager server first determines the reclamation threshold, which is indicated by the value of the RECLAIM parameter. The server then examines the percentage of reclaimable space for each volume in the storage pool. If the percentage of reclaimable space on a volume is greater than the reclamation threshold of the storage pool, the volume is a candidate for reclamation.

For example, suppose storage pool FILEPOOL has a reclamation threshold of 70 percent. This value indicates that the server can reclaim any volume in the storage pool that has a percentage of reclaimable space that is greater than 70 percent. The storage pool has three volumes:
  • FILEVOL1 with 65 percent reclaimable space
  • FILEVOL2 with 80 percent reclaimable space
  • FILEVOL3 with 95 percent reclaimable space

When reclamation begins, the server compares the percent of reclaimable space for each volume with the reclamation threshold of 70 percent. In this example, FILEVOL2 and FILEVOL3 are candidates for reclamation because their percentages of reclaimable space are greater than 70. To determine the percentage of reclaimable space for a volume, issue the QUERY VOLUME command and specify FORMAT=DETAILED. The value in the field Pct. Reclaimable Space is the percentage of reclaimable space for the volume.
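The candidate selection in the FILEPOOL example can be sketched in a few lines. This is an illustrative sketch only, not Tivoli Storage Manager code; the function name is hypothetical.

```python
# Hypothetical sketch of reclamation candidate selection: a volume is a
# candidate when its percentage of reclaimable space is greater than the
# storage pool's RECLAIM threshold.

def reclamation_candidates(volumes, reclaim_threshold):
    """volumes: list of (volume_name, pct_reclaimable_space) pairs."""
    return [name for name, pct in volumes if pct > reclaim_threshold]

filepool = [("FILEVOL1", 65), ("FILEVOL2", 80), ("FILEVOL3", 95)]
print(reclamation_candidates(filepool, 70))   # ['FILEVOL2', 'FILEVOL3']
```

FILEVOL1 stays put at 65 percent; a threshold of 60 (the default) would make all three volumes candidates.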

Specify a value of 50 percent or greater for this parameter so that files stored on two volumes can be combined onto a single output volume.

Restriction: This parameter is not available for storage pools that use the following data formats:
  • NETAPPDUMP
  • CELERRADUMP
  • NDMPDUMP
RECLAIMPRocess
Specifies the number of parallel processes to use for reclaiming the volumes in this storage pool. This parameter is optional. Enter a value from 1 to 999. The default value is 1.

When calculating the value for this parameter, consider the number of sequential storage pools that will be involved with the reclamation and the number of logical and physical drives that can be dedicated to the operation. To access a sequential access volume, IBM Tivoli Storage Manager uses a mount point and, if the device type is not FILE, a physical drive. The number of available mount points and drives depends on other Tivoli Storage Manager and system activity and on the mount limits of the device classes for the sequential access storage pools that are involved in the reclamation.

For example, suppose that you want to reclaim the volumes from two sequential storage pools simultaneously and that you want to specify four processes for each of the storage pools. The storage pools have the same device class. Assuming that the RECLAIMSTGPOOL parameter is not specified or that the reclaim storage pool has the same device class as the storage pool that is being reclaimed, each process requires two mount points and, if the device type is not FILE, two drives. (One of the drives is for the input volume, and the other drive is for the output volume.) To run eight reclamation processes simultaneously, you need a total of at least 16 mount points and 16 drives. The device class for the storage pools must have a mount limit of at least 16.
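The resource arithmetic in the example above can be sketched as follows. This is an illustrative sketch only, not Tivoli Storage Manager code; the function name is hypothetical.

```python
# Hypothetical sketch of the mount-point/drive arithmetic for parallel
# reclamation: each process needs one input and one output mount point,
# and a drive per mount point when the device type is not FILE.

def reclamation_requirements(pools, processes_per_pool, device_is_file=False):
    """Return (processes, mount_points, drives) needed to run all
    reclamation processes simultaneously."""
    processes = pools * processes_per_pool
    mount_points = processes * 2          # input volume + output volume
    drives = 0 if device_is_file else mount_points
    return processes, mount_points, drives

# Two tape pools, four processes each, as in the example:
print(reclamation_requirements(2, 4))     # (8, 16, 16)
```

The mount limit of the device class must be at least the mount-point total, here 16. The same arithmetic applies to parallel migration under the MIGPROCESS parameter.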

If the number of reclamation processes you specify is more than the number of available mount points or drives, the processes that do not obtain mount points or drives will wait for mount points or drives to become available. If mount points or drives do not become available within the MOUNTWAIT time, the reclamation processes will end. For information about specifying the MOUNTWAIT time, see DEFINE DEVCLASS (Define a device class).

The Tivoli Storage Manager server will start the specified number of reclamation processes regardless of the number of volumes that are eligible for reclamation. For example, if you specify ten reclamation processes and only six volumes are eligible for reclamation, the server will start ten processes and four of them will complete without processing a volume.

Restriction: This parameter is not available for storage pools that use the following data formats:
  • NETAPPDUMP
  • CELERRADUMP
  • NDMPDUMP
RECLAIMSTGpool
Specifies another primary storage pool as a target for reclaimed data from this storage pool. This parameter is optional. When the server reclaims volumes for the storage pool, the server moves unexpired data from the volumes that are being reclaimed to the storage pool named with this parameter.

A reclaim storage pool is most useful for a storage pool that has only one drive in its library. When you specify this parameter, the server moves all data from reclaimed volumes to the reclaim storage pool regardless of the number of drives in the library.

To move data from the reclaim storage pool back to the original storage pool, use the storage pool hierarchy. Specify the original storage pool as the next storage pool for the reclaim storage pool.

Restriction: This parameter is not available for storage pools that use the following data formats:
  • NETAPPDUMP
  • CELERRADUMP
  • NDMPDUMP
RECLAMATIONType
Specifies the method by which volumes are reclaimed and managed. This parameter is optional. The default value is THRESHOLD. Possible values are the following:
THRESHold
Specifies that volumes that belong to this storage pool are reclaimed based on the threshold value in the RECLAIM attribute for this storage pool.
SNAPlock
Specifies that FILE volumes that belong to this storage pool are managed for retention using NetApp Data ONTAP software and NetApp SnapLock volumes. This parameter is only valid for storage pools that are being defined to a server that has data retention protection enabled and that is assigned to a FILE device class. Volumes in this storage pool are not reclaimed based on threshold; the RECLAIM value for the storage pool is ignored.

All volumes in this storage pool are created as FILE volumes. A retention date, which is derived from the retention attributes in the archive copy group for the storage pool, is set in the metadata for the FILE volume using the SnapLock feature of the NetApp Data ONTAP operating system. Until the retention date expires, the FILE volume and any data on it cannot be deleted from the physical SnapLock volume on which it is stored.

All storage pools that are defined to the same device class name must have the same RECLAMATIONTYPE value. The DEFINE command can fail if the RECLAMATIONTYPE parameter that you specify is different from the value that is currently defined for storage pools that are already assigned to that device class name.

Restriction: This parameter is not available for storage pools that use the following data formats:
  • NETAPPDUMP
  • CELERRADUMP
  • NDMPDUMP
COLlocate
Specifies whether the server attempts to keep data that belongs to one of the following candidates on as few volumes as possible:
  • A single client node
  • A group of file spaces
  • A group of client nodes
  • A client file space
This parameter is optional. The default value is GROUP.

Collocation reduces the number of sequential access media mounts for restore, retrieve, and recall operations. However, collocation increases both the amount of server time that is needed to collocate files for storing and the number of volumes required. Collocation can also affect the number of processes that migrate data from disk to a sequential-access storage pool.

You can specify one of the following options:
No
Specifies that collocation is disabled. During migration from disk, processes are created at a file space level.
GRoup
Specifies that collocation is enabled at the group level for client nodes or file spaces. For collocation groups, the server attempts to put data for nodes or file spaces that belong to the same collocation group on as few volumes as possible.

If you specify COLLOCATE=GROUP but do not define any collocation groups, or if you do not add nodes or file spaces to a collocation group, data is collocated by node. Consider tape usage when you organize client nodes or file spaces into collocation groups.

For example, if a tape-based storage pool consists of data from nodes and you specify COLLOCATE=GROUP, the server completes the following actions:
  • Collocates the data by group for grouped nodes. Whenever possible, the server collocates data that belongs to a group of nodes on a single tape or on as few tapes as possible. Data for a single node can also be spread across several tapes that are associated with a group.
  • Collocates the data by node for ungrouped nodes. Whenever possible, the server stores the data for a single node on a single tape. All available tapes that already have data for the node are used before available space on any other tape is used.
  • During migration from disk, the server creates migration processes at the collocation group level for grouped nodes, and at the node level for ungrouped nodes.
If a tape-based storage pool consists of data from grouped file spaces and you specify COLLOCATE=GROUP, the server completes the following actions:
  • Collocates the data by group for grouped file spaces only. Whenever possible, the server collocates data that belongs to a group of file spaces on a single tape or on as few tapes as possible. Data for a single file space can also be spread across several tapes that are associated with a group.
  • Collocates the data by node (for file spaces that are not explicitly defined to a file space collocation group). For example, node1 has file spaces named A, B, C, D, and E. File spaces A and B belong to a file space collocation group but C, D, and E do not. File spaces A and B are collocated by file space collocation group, while C, D, and E are collocated by node.
  • During migration from disk, the server creates migration processes at the collocation group level for grouped file spaces.

Data is collocated on the fewest possible sequential-access volumes.

NODe
Specifies that collocation is enabled at the client node level. For collocation groups, the server attempts to put data for one node on as few volumes as possible. If the node has multiple file spaces, the server does not try to collocate those file spaces. For compatibility with an earlier version, COLLOCATE=YES is still accepted by the server to specify collocation at the client node level.

If a storage pool contains data for a node that is a member of a collocation group and you specify COLLOCATE=NODE, the data is collocated by node.

For COLLOCATE=NODE, the server creates processes at the node level when you migrate data from disk.

FIlespace
Specifies that collocation is enabled at the file space level for client nodes. The server attempts to place data for one node and file space on as few volumes as possible. If a node has multiple file spaces, the server attempts to place data for different file spaces on different volumes.

For COLLOCATE=FILESPACE, the server creates processes at the file space level when you migrate data from disk.

MAXSCRatch (Required)
Specifies the maximum number of scratch volumes that the server can request for this storage pool. You can specify an integer from 0 to 100000000. By allowing the server to request scratch volumes, you avoid having to define each volume to be used.

The value specified for this parameter is used to estimate the total number of volumes available in the storage pool and the corresponding estimated capacity for the storage pool.
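The relationship between MAXSCRATCH and the capacity estimate can be sketched roughly as follows. This is an assumption-laden sketch, not Tivoli Storage Manager code: the exact server formula is not documented here, and the sketch simply assumes the estimate scales with the total volume count.

```python
# Rough, hypothetical sketch: the pool's estimated capacity is assumed to
# be the total number of volumes (defined plus the MAXSCRATCH allowance)
# times the estimated capacity of one volume. The real server formula may
# differ.

def estimated_pool_capacity(defined_volumes, maxscratch, est_volume_capacity):
    return (defined_volumes + maxscratch) * est_volume_capacity

# 100 scratch volumes allowed, no predefined volumes, ~10 GB per volume:
print(estimated_pool_capacity(0, 100, 10 * 1024**3))
```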

Scratch volumes are automatically deleted from the storage pool when they become empty. When scratch volumes with the device type of FILE are deleted, the space that the volumes occupied is freed by the server and returned to the file system.

Tip: For server-to-server operations that use virtual volumes and that store a small amount of data, consider specifying a value for the MAXSCRATCH parameter that is higher than the value you typically specify for write operations to other types of volumes. After a write operation to a virtual volume, Tivoli Storage Manager marks the volume as FULL, even if the value of the MAXCAPACITY parameter on the device-class definition has not been reached. The Tivoli Storage Manager server does not keep virtual volumes in FILLING status and does not append to them. If the value of the MAXSCRATCH parameter is too low, server-to-server operations can fail.
REUsedelay
Specifies the number of days that must elapse after all files are deleted from a volume before the volume can be rewritten or returned to the scratch pool. This parameter is optional. You can specify an integer from 0 to 9999. The default value is 0, which means that a volume can be rewritten or returned to the scratch pool as soon as all the files are deleted from the volume.
Important: Use this parameter to help ensure that when you restore the database to an earlier level, database references to files in the storage pool are still valid. You must set this parameter to a value greater than the number of days you plan to retain the oldest database backup. The number of days that are specified for this parameter must be the same as the number specified for the SET DRMDBBACKUPEXPIREDAYS command.
OVFLOcation
Specifies the overflow location for the storage pool. The server assigns this location name to a volume that is ejected from the library by the command. This parameter is optional. The location name can be a maximum length of 255 characters. Enclose the location name in quotation marks if the location name contains any blank characters.
Restriction: This parameter is not available for storage pools that use the following data formats:
  • NETAPPDUMP
  • CELERRADUMP
  • NDMPDUMP
MIGDelay
Specifies the minimum number of days a file must remain in a storage pool before it becomes eligible for migration. All files on a volume must be eligible for migration before the server selects the volume for migration. To calculate a value to compare to the specified MIGDELAY, the server counts the number of days that the file has been in the storage pool.

This parameter is optional. You can specify an integer from 0 to 9999. The default is 0, which means that you do not want to delay migration. If you want the server to count the number of days based only on when a file was stored and not when it was retrieved, use the NORETRIEVEDATE server option.

Restriction: This parameter is not available for storage pools that use the following data formats:
  • NETAPPDUMP
  • CELERRADUMP
  • NDMPDUMP
MIGContinue
Specifies whether you allow the server to migrate files that do not satisfy the migration delay time. This parameter is optional. The default is YES.

Because you can require that files remain in the storage pool for a minimum number of days, the server may migrate all eligible files to the next storage pool yet not meet the low migration threshold. This parameter specifies whether the server can continue the migration process by migrating files that do not satisfy the migration delay time.

Possible values are:
Yes
Specifies that, when necessary to meet the low migration threshold, the server continues to migrate files that do not satisfy the migration delay time.

If you allow more than one migration process for the storage pool, some files that do not satisfy the migration delay time may be migrated unnecessarily. As one process migrates files that satisfy the migration delay time, a second process could begin migrating files that do not satisfy the migration delay time to meet the low migration threshold. The first process that is still migrating files that satisfy the migration delay time might have, by itself, caused the low migration threshold to be met.

No
Specifies that the server stops migration when no eligible files remain to be migrated, even before reaching the low migration threshold. The server does not migrate files unless the files satisfy the migration delay time.
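For example, to ensure that the server never migrates files that have been in the pool for fewer than 14 days, even if the low migration threshold cannot be reached, you might combine MIGDELAY and MIGCONTINUE as in the following sketch (the pool, device class, and next-pool names are hypothetical):
define stgpool tapedata ltoclass maxscratch=50
 nextstgpool=offpool migdelay=14 migcontinue=no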
MIGPRocess
Specifies the number of parallel processes to use for migrating the files from the volumes in this storage pool. This parameter is optional. Enter a value from 1 to 999. The default value is 1.

When calculating the value for this parameter, consider the number of sequential storage pools that will be involved with the migration, and the number of logical and physical drives that can be dedicated to the operation. To access a sequential-access volume, Tivoli Storage Manager uses a mount point and, if the device type is not FILE, a physical drive. The number of available mount points and drives depends on other Tivoli Storage Manager and system activity and on the mount limits of the device classes for the sequential access storage pools that are involved in the migration.

For example, suppose you want to simultaneously migrate the files from volumes in two primary sequential storage pools and that you want to specify three processes for each of the storage pools. The storage pools have the same device class. Assuming that the storage pool to which files are being migrated has the same device class as the storage pool from which files are being migrated, each process requires two mount points and, if the device type is not FILE, two drives. (One drive is for the input volume, and the other drive is for the output volume.) To run six migration processes simultaneously, you need a total of at least 12 mount points and 12 drives. The device class for the storage pools must have a mount limit of at least 12.

If the number of migration processes you specify is more than the number of available mount points or drives, the processes that do not obtain mount points or drives will wait for mount points or drives to become available. If mount points or drives do not become available within the MOUNTWAIT time, the migration processes will end. For information about specifying the MOUNTWAIT time, see DEFINE DEVCLASS (Define a device class).

The Tivoli Storage Manager server will start the specified number of migration processes regardless of the number of volumes that are eligible for migration. For example, if you specify ten migration processes and only six volumes are eligible for migration, the server will start ten processes and four of them will complete without processing a volume.

Tip: When you specify this parameter, consider whether the simultaneous-write function is enabled for server data migration. Each migration process requires a mount point and a drive for each copy storage pool and active-data pool that is defined to the target storage pool.
Restriction: This parameter is not available for storage pools that use the following data formats:
  • NETAPPDUMP
  • CELERRADUMP
  • NDMPDUMP
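As an illustration of the mount-point arithmetic described above, suppose that the source pool and its next pool share one device class. Three migration processes then require six mount points and, for non-FILE device types, six drives, so the device class must have a mount limit of at least 6. The names in this sketch are hypothetical:
define stgpool ltopool ltoclass maxscratch=100
 nextstgpool=archpool migprocess=3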
DATAFormat
Specifies the data format to use to back up files to this storage pool and restore files from this storage pool. The default format is the NATIVE server format. Possible values are:
NATive
Specifies that the data format is the native Tivoli Storage Manager server format and includes block headers.
NONblock
Specifies that the data format is the native Tivoli Storage Manager server format and does not include block headers.
Important: The default minimum block size on a volume associated with a FILE device class is 256 KB, regardless of how much data is being written to the volume. For certain tasks (for example, using content-management products, using the DIRMC client option to store directory information, or migrating very small files using Tivoli Storage Manager for Space Management or Tivoli Storage Manager HSM for Windows), you can minimize wasted space on storage volumes by specifying the NONBLOCK data format. In most situations, however, the NATIVE format is preferred.
NETAPPDump
Specifies the data is in a NetApp dump format. This data format should be specified for file system images that are in a dump format and that have been backed up from a NetApp or an IBM System Storage N Series file server using NDMP. The server will not complete migration, reclamation, or AUDIT VOLUME for a storage pool with DATAFORMAT=NETAPPDUMP. You can use the MOVE DATA command to move data from one primary storage pool to another, or out of a volume if the volume must be reused.
CELERRADump
Specifies that the data is in an EMC Celerra dump format. This data format should be specified for file system images that are in a dump format and that have been backed up from an EMC Celerra file server using NDMP. The server will not complete migration, reclamation, or AUDIT VOLUME for a storage pool with DATAFORMAT=CELERRADUMP. You can use the MOVE DATA command to move data from one primary storage pool to another, or out of a volume if the volume must be reused.
NDMPDump
Specifies that the data is in NAS vendor-specific backup format. Use this data format for file system images that have been backed up from a NAS file server other than a NetApp or EMC Celerra file server. The server will not complete migration, reclamation, or AUDIT VOLUME for a storage pool with DATAFORMAT=NDMPDUMP. You can use the MOVE DATA command to move data from one primary storage pool to another, or out of a volume if the volume must be reused.
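For example, a pool intended to receive dump images backed up from a NetApp file server using NDMP might be defined as in the following sketch (the pool and device class names are hypothetical; remember that migration, reclamation, and AUDIT VOLUME are not available for this data format):
define stgpool naspool nasclass maxscratch=50
 dataformat=netappdump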
AUTOCopy
Specifies when Tivoli Storage Manager completes simultaneous-write operations. The default value is CLIENT. This parameter is optional and affects the following operations:
  • Client store sessions
  • Server import processes
  • Server data-migration processes

If an error occurs while data is being simultaneously written to a copy storage pool or active-data pool during a migration process, the server stops writing to the failing storage pools for the remainder of the process. However, the server continues to store files into the primary storage pool and any remaining copy storage pools or active-data pools. These pools remain active for the duration of the migration process. Copy storage pools are specified using the COPYSTGPOOLS parameter. Active-data pools are specified using the ACTIVEDATAPOOLS parameter.

Possible values are:
None
Specifies that the simultaneous-write function is disabled.
CLient
Specifies that data is written simultaneously to copy storage pools and active-data pools during client store sessions or server import processes. During server import processes, data is written simultaneously to only copy storage pools. Data is not written to active-data pools during server import processes.
MIGRation
Specifies that data is written simultaneously to copy storage pools and active-data pools only during migration to this storage pool. During server data-migration processes, data is written simultaneously to copy storage pools and active-data pools only if the data does not exist in those pools. Nodes whose data is being migrated must be in a domain associated with an active-data pool. If the nodes are not in a domain associated with an active-data pool, the data cannot be written to the pool.
All
Specifies that data is written simultaneously to copy storage pools and active-data pools during client store sessions, server import processes, or server data-migration processes. Specifying this value ensures that data is written simultaneously whenever this pool is a target for any of the eligible operations.
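For example, to write data simultaneously to a copy storage pool only during server data migration into this pool, you might specify the following (the pool names are hypothetical):
define stgpool tapetarget ltoclass maxscratch=100
 copystgpools=copypool1 autocopy=migration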
COPYSTGpools
Specifies the names of copy storage pools where the server simultaneously writes data. The COPYSTGPOOLS parameter is optional. You can specify a maximum of three copy pool names that are separated by commas. Spaces between the names of the copy pools are not permitted. When specifying a value for the COPYSTGPOOLS parameter, you can also specify a value for the COPYCONTINUE parameter.

The combined total number of storage pools that are specified in the COPYSTGPOOLS and ACTIVEDATAPOOLS parameters cannot exceed three.

When a data storage operation switches from a primary storage pool to a next storage pool, the next storage pool inherits the list of copy storage pools and the COPYCONTINUE value from the primary storage pool. The primary storage pool is specified by the copy group of the management class that is bound to the data.

The server can write data simultaneously to copy storage pools during the following operations:
  • Back up and archive operations by Tivoli Storage Manager backup-archive clients or application clients using the Tivoli Storage Manager API
  • Migration operations by Tivoli Storage Manager for Space Management clients
  • Import operations that involve copying exported file data from external media to a storage pool defined with a copy storage pool list
Restrictions:
  1. This parameter is available only to primary storage pools that use NATIVE or NONBLOCK data format. This parameter is not available for storage pools that use the following data formats:
    • NETAPPDUMP
    • CELERRADUMP
    • NDMPDUMP
  2. Writing data simultaneously to copy storage pools is not supported when using LAN-free data movement. Simultaneous-write operations take precedence over LAN-free data movement, causing the operations to go over the LAN. However, the simultaneous-write configuration is accepted.
  3. The simultaneous-write function is not supported for NAS backup operations. If the primary storage pool specified in the DESTINATION or TOCDESTINATION in the copy group of the management class has copy storage pools defined, the copy storage pools are ignored and the data is stored into the primary storage pool only.
  4. You cannot use the simultaneous-write function with Centera storage devices.
Attention: The function that is provided by the COPYSTGPOOLS parameter is not intended to replace the BACKUP STGPOOL command. If you use the COPYSTGPOOLS parameter, continue to use the BACKUP STGPOOL command to ensure that the copy storage pools are complete copies of the primary storage pool. There are cases when a copy may not be created. For more information, see the COPYCONTINUE parameter description.
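For example, the following sketch (all names hypothetical) writes client data simultaneously to two copy storage pools and keeps the session running if a write to one of them fails. A regular BACKUP STGPOOL schedule is still required to guarantee that the copy pools are complete:
define stgpool primepool ltoclass maxscratch=100
 copystgpools=cpool1,cpool2 copycontinue=yes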
COPYContinue
Specifies how the server should react to a copy storage pool write failure for any of the copy storage pools listed in the COPYSTGPOOLS parameter. This parameter is optional. The default value is YES. When specifying the COPYCONTINUE parameter, you must also specify the COPYSTGPOOLS parameter.

The COPYCONTINUE parameter has no effect on the simultaneous-write function during migration.

Possible values are:
Yes
If the COPYCONTINUE parameter is set to YES, the server stops writing to the failing copy storage pools for the remainder of the session, but continues storing files into the primary storage pool and any remaining copy storage pools. The copy storage pool list is active only for the life of the client session and applies to all the primary storage pools in a particular storage pool hierarchy.
No
If the COPYCONTINUE parameter is set to NO, the server fails the current transaction and discontinues the store operation.
Restrictions:
  • The setting of the COPYCONTINUE parameter does not affect active-data pools. If a write failure occurs for any of the active-data pools, the server stops writing to the failing active-data pool for the remainder of the session, but continues storing files into the primary pool and any remaining active-data pools and copy storage pools. The active-data pool list is active only for the life of the session and applies to all the primary storage pools in a particular storage pool hierarchy.
  • The setting of the COPYCONTINUE parameter does not affect the simultaneous-write function during server import. If data is being written simultaneously and a write failure occurs to the primary storage pool or any copy storage pool, the server import process fails.
  • The setting of the COPYCONTINUE parameter does not affect the simultaneous-write function during server data migration. If data is being written simultaneously and a write failure occurs to any copy storage pool or active-data pool, the failing storage pool is removed and the data migration process continues. Write failures to the primary storage pool cause the migration process to fail.
Restriction: This parameter is not available for storage pools that use the following data formats:
  • NETAPPDUMP
  • CELERRADUMP
  • NDMPDUMP
ACTIVEDATApools
Specifies the names of active-data pools where the server simultaneously writes data during a client backup operation. The ACTIVEDATAPOOLS parameter is optional. Spaces between the names of the active-data pools are not permitted.

The combined total number of storage pools specified in the COPYSTGPOOLS and ACTIVEDATAPOOLS parameters cannot exceed three.

When a data storage operation switches from a primary storage pool to a next storage pool, the next storage pool inherits the list of active-data pools from the destination storage pool specified in the copy group. The primary storage pool is specified by the copy group of the management class that is bound to the data.

The server can write data simultaneously to active-data pools only during backup operations by Tivoli Storage Manager backup-archive clients or application clients using the Tivoli Storage Manager API.
Restrictions:
  1. This parameter is available only to primary storage pools that use NATIVE or NONBLOCK data format. This parameter is not available for storage pools that use the following data formats:
    • NETAPPDUMP
    • CELERRADUMP
    • NDMPDUMP
  2. Writing data simultaneously to active-data pools is not supported when using LAN-free data movement. Simultaneous-write operations take precedence over LAN-free data movement, causing the operations to go over the LAN. However, the simultaneous-write configuration is accepted.
  3. The simultaneous-write function is not supported when a NAS backup operation is writing a TOC file. If the primary storage pool specified in the TOCDESTINATION in the copy group of the management class has active-data pools defined, the active-data pools are ignored, and the data is stored into the primary storage pool only.
  4. You cannot use the simultaneous-write function with Centera storage devices.
  5. Data being imported will not be stored in active-data pools. After an import operation, use the COPY ACTIVEDATA command to store the imported data in an active-data pool.
Attention: The function provided by the ACTIVEDATAPOOLS parameter is not intended to replace the COPY ACTIVEDATA command. If you use the ACTIVEDATAPOOLS parameter, use the COPY ACTIVEDATA command to ensure that the active-data pools contain all active data of the primary storage pool.
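For example, to write active client backup data simultaneously to one active-data pool and one copy storage pool, staying within the combined limit of three, you might specify the following (all names are hypothetical):
define stgpool filepool fileclass maxscratch=200
 copystgpools=cpool1 activedatapools=adpool1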
DEDUPlicate
Specifies whether the data that is stored in this storage pool will be deduplicated. This parameter is optional and is valid only for storage pools that are defined with a FILE-type device class. The default value is NO.
IDENTIFYPRocess
Specifies the number of parallel processes to use for server-side duplicate identification. This parameter is optional and is valid only for storage pools that are defined with a FILE device class. Enter a value from 0 to 50. The default value is 1. If the value of the DEDUPLICATE parameter is NO, the default setting for IDENTIFYPROCESS has no effect.

When calculating the value for this parameter, consider the workload on the server and the amount of data requiring data deduplication. Server-side duplicate identification requires disk I/O and processor resources, so the more processes you allocate to data deduplication, the heavier the workload that you place on your system. In addition, consider the number of volumes that require processing. Server-side duplicate-identification processes work on volumes containing data that requires deduplication. If you update a storage pool, specifying that the data in the storage pool is to be deduplicated, all the volumes in the pool require processing. For this reason, you might have to define a high number of duplicate-identification processes initially. Over time, however, as existing volumes are processed, only the volumes containing new data have to be processed. When that happens, you can reduce the number of duplicate-identification processes.

Remember: Duplicate-identification processes can be either active or idle. Processes that are working on files are active. Processes that are waiting for files to work on are idle. Processes remain idle until volumes with data to be deduplicated become available. The output of the QUERY PROCESS command for a duplicate-identification process includes the total number of bytes and files that have been processed since the process first started. For example, if a duplicate-identification process processes four files, becomes idle, and then processes five more files, then the total number of files processed is nine. Processes end only when canceled or when the number of duplicate-identification processes for the storage pool is changed to a value less than the number currently specified.
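For example, a deduplicated pool with four duplicate-identification processes might be defined as in the following sketch (the pool and device class names are hypothetical; the device class must have a device type of FILE):
define stgpool deduppool fileclass maxscratch=200
 deduplicate=yes identifyprocess=4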
AIX, Linux, Sun Solaris, and Windows operating systems

Example: Define a primary storage pool with an 8MMTAPE device class

Define a primary storage pool named 8MMPOOL to the 8MMTAPE device class (with a device type of 8MM) with a maximum file size of 5 MB. Store any files larger than 5 MB in subordinate pools, beginning with POOL1. Enable collocation of files for client nodes. Allow as many as 5 scratch volumes for this storage pool.
define stgpool 8mmpool 8mmtape maxsize=5m
 nextstgpool=pool1 collocate=node
 maxscratch=5
HP-UX operating systems

Example: Define a primary storage pool with a TAPE8MM device class

Define a primary storage pool named TAPEPOOL to the TAPE8MM device class (with a device type of GENERICTAPE) with a maximum file size of 5 MB. Store any files larger than 5 MB in subordinate pools, beginning with POOL1. Enable collocation of files for client nodes. Allow as many as 5 scratch volumes for this storage pool.
define stgpool tapepool tape8mm maxsize=5m
 nextstgpool=pool1 collocate=node
 maxscratch=5

