IBM Tivoli Storage Manager, Version 7.1

DEFINE STGPOOL (Define a primary storage pool assigned to random access devices)

Use this command to define a primary storage pool that is assigned to random access devices.

Privilege class

To issue this command, you must have system privilege.

Syntax

                                    .-POoltype--=--PRimary-.   
>>-DEFine STGpool--pool_name--DISK--+----------------------+---->
                                    '-POoltype--=--PRimary-'   

>--+-----------------------------+------------------------------>
   '-DESCription--=--description-'   

   .-ACCess--=--READWrite-------.   
>--+----------------------------+------------------------------->
   '-ACCess--=--+-READWrite---+-'   
                +-READOnly----+     
                '-UNAVailable-'     

   .-MAXSIze--=--NOLimit-----------.  .-CRCData--=--No------.   
>--+-------------------------------+--+---------------------+--->
   '-MAXSIze--=--maximum_file_size-'  '-CRCData--=--+-Yes-+-'   
                                                    '-No--'     

                                  .-HIghmig--=--90------.   
>--+---------------------------+--+---------------------+------->
   '-NEXTstgpool--=--pool_name-'  '-HIghmig--=--percent-'   

   .-LOwmig--=--70------.  .-CAChe--=--No------.   
>--+--------------------+--+-------------------+---------------->
   '-LOwmig--=--percent-'  '-CAChe--=--+-Yes-+-'   
                                       '-No--'     

   .-MIGPRocess--=--1------.  .-MIGDelay--=--0----.   
>--+-----------------------+--+-------------------+------------->
   '-MIGPRocess--=--number-'  '-MIGDelay--=--days-'   

   .-MIGContinue--=--Yes-----.   
>--+-------------------------+---------------------------------->
   '-MIGContinue--=--+-Yes-+-'   
                     '-No--'     

   .-AUTOCopy--=--CLient--------.   
>--+----------------------------+------------------------------->
   '-AUTOCopy--=--+-None------+-'   
                  +-CLient----+     
                  +-MIGRation-+     
                  '-All-------'     

>--+-------------------------------------------------------------------+-->
   |                  .-,--------------.                               |   
   |                  V                |  .-COPYContinue--=--Yes-----. |   
   '-COPYSTGpools--=----copy_pool_name-+--+--------------------------+-'   
                                          '-COPYContinue--=--+-Yes-+-'     
                                                             '-No--'       

>--+-----------------------------------------------+------------>
   |                     .-,---------------------. |   
   |                     V                       | |   
   '-ACTIVEDATApools--=----active-data_pool_name-+-'   

   .-SHRED--=--0-----------------------.   
>--+-----------------------------------+-----------------------><
   |                           (1) (2) |   
   '-SHRED--=--overwrite_count---------'   

Notes:
  1. This parameter is not available for Centera or SnapLock storage pools.
  2. On Linux operating systems, this parameter is not available for SnapLock storage pools.

Parameters

pool_name (Required)
Specifies the name of the storage pool to be defined. The name must be unique, and the maximum length is 30 characters.
DISK (Required)
Specifies that you want to define a storage pool to the DISK device class (the DISK device class is predefined during installation).
POoltype=PRimary
Specifies that you want to define a primary storage pool. This parameter is optional. The default value is PRIMARY.
DESCription
Specifies a description of the storage pool. This parameter is optional. The maximum length of the description is 255 characters. Enclose the description in quotation marks if it contains any blank characters.
ACCess
Specifies how client nodes and server processes (such as migration and reclamation) can access files in the storage pool. This parameter is optional. The default value is READWRITE. Possible values are:
READWrite
Specifies that client nodes and server processes can read and write to files stored on volumes in the storage pool.
READOnly
Specifies that client nodes can only read files from the volumes in the storage pool.

Server processes can move files within the volumes in the storage pool. However, no new writes are permitted to volumes in the storage pool from volumes outside the storage pool.

If this storage pool has been specified as a subordinate storage pool (with the NEXTSTGPOOL parameter) and is defined as readonly, the storage pool is skipped when server processes attempt to write files to the storage pool.

UNAVailable
Specifies that client nodes cannot access files stored on volumes in the storage pool.

Server processes can move files within the volumes in the storage pool and can also move or copy files from this storage pool to another storage pool. However, no new writes are permitted to volumes in the storage pool from volumes outside the storage pool.

If this storage pool has been specified as a subordinate storage pool (with the NEXTSTGPOOL parameter) and is defined as unavailable, the storage pool is skipped when server processes attempt to write files to the storage pool.
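
For example, you might define the pool and then use the UPDATE STGPOOL command to restrict access while volumes are being serviced, restoring read/write access afterward. This is a sketch; the pool name ENGPOOL is hypothetical:
/* ENGPOOL is a hypothetical pool name */
define stgpool engpool disk
 description="engineering disk pool"
update stgpool engpool access=unavailable
update stgpool engpool access=readwrite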

MAXSIze
Specifies the maximum size for a physical file that the server can store in the storage pool. This parameter is optional. The default value is NOLIMIT. Possible values are:
NOLimit
Specifies that there is no maximum size limit for physical files that are stored in the storage pool.
maximum_file_size
Limits the maximum physical file size. Specify an integer from 1 to 999999, followed by a scale factor. For example, MAXSIZE=5G specifies that the maximum file size for this storage pool is 5 GB. Scale factors are:
Scale factor   Meaning
K              kilobyte
M              megabyte
G              gigabyte
T              terabyte

See the following table for information about where a file is stored when its size exceeds the MAXSIZE parameter.

Table 1. Where a file is stored according to the file size and the pool that is specified
File size                  Pool specified                                                    Result
Exceeds the maximum size   No pool is specified as the next storage pool in the hierarchy   The server does not store the file
Exceeds the maximum size   A pool is specified as the next storage pool in the hierarchy    The server stores the file in the next storage pool that can accept the file size

If you specify the next storage pool parameter, define one storage pool in your hierarchy to have no limit on the maximum file size. By having no limit on the size for at least one pool, you ensure that no matter what its size, the server can store the file.

For logical files that are part of an aggregate, the server considers the size of the aggregate to be the file size. Therefore, the server does not store logical files that are smaller than the maximum size limit if the files are part of an aggregate that is larger than the maximum size limit.
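
For example, to keep large files out of a disk pool, you might cap the physical file size and route larger files to the next storage pool. In this sketch, DISKDATA and TAPEDATA are hypothetical pool names, and TAPEDATA is assumed to be defined with MAXSIZE=NOLIMIT so that a file of any size can be stored somewhere in the hierarchy:
/* DISKDATA and TAPEDATA are hypothetical pool names */
define stgpool diskdata disk
 maxsize=500m nextstgpool=tapedata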

CRCData
Specifies whether a cyclic redundancy check (CRC) validates storage pool data when audit volume processing occurs on the server. This parameter is optional. The default value is NO. By setting CRCDATA to YES and scheduling an AUDIT VOLUME command you can continually ensure the integrity of data that is stored in your storage hierarchy. Possible values are:
Yes
Specifies that data is stored containing CRC information, allowing for audit volume processing to validate storage pool data. This mode affects performance because additional processing is required to calculate and compare CRC values between the storage pool and the server.
No
Specifies that data is stored without CRC information.
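
For example, you might define the pool with CRCDATA=YES and schedule a weekly audit to validate the stored data. This sketch uses hypothetical names for the pool (DISKDATA) and the administrative schedule (CRC_AUDIT):
/* DISKDATA and CRC_AUDIT are hypothetical names */
define stgpool diskdata disk crcdata=yes
define schedule crc_audit type=administrative active=yes
 cmd="audit volume stgpool=diskdata fix=no"
 starttime=03:00 period=1 perunits=weeks
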
NEXTstgpool
Specifies a primary storage pool to which files are migrated. This parameter is optional.
If you do not specify a next storage pool, the following actions occur:
  • The server cannot migrate files from this storage pool
  • The server cannot store files that exceed the maximum size for this storage pool in another storage pool

You cannot create a chain of storage pools that leads to an endless loop through the NEXTSTGPOOL parameter. At least one storage pool in the hierarchy must have no value that is specified for NEXTSTGPOOL.

If you specify a sequential access pool as the NEXTSTGPOOL, the pool can be "NATIVE" or "NONBLOCK" data format only.
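
For example, a simple two-level hierarchy ends at a pool that has no NEXTSTGPOOL value. In this sketch, DISKPOOL and TAPEPOOL are hypothetical pool names, and LTOCLASS is assumed to be an existing sequential access device class whose pools use the NATIVE data format:
/* TAPEPOOL ends the chain because it has no NEXTSTGPOOL value */
define stgpool tapepool ltoclass maxscratch=50
define stgpool diskpool disk nextstgpool=tapepool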

HIghmig
Specifies that the server starts migration for this storage pool when the amount of data in the pool reaches this percentage of the pool's estimated capacity. This parameter is optional. You can specify an integer from 0 to 100. The default value is 90.

When the storage pool exceeds the high migration threshold, the server can start migration of files, by node, to the next storage pool, as defined by the NEXTSTGPOOL parameter. You can specify HIGHMIG=100 to prevent migration for this storage pool.

LOwmig
Specifies that the server stops migration for this storage pool when the amount of data in the pool reaches this percentage of the pool's estimated capacity. This parameter is optional. You can specify an integer from 0 to 99. The default value is 70.

When migration is by node or file space, depending upon collocation, the level of the storage pool can fall below the value that you specified for this parameter. To empty the storage pool, set LOWMIG=0.
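
For example, for a pool with an estimated capacity of 200 GB, HIGHMIG=80 starts migration when approximately 160 GB is in use, and LOWMIG=40 stops migration when usage falls to approximately 80 GB. The pool names in this sketch are hypothetical:
/* DISKPOOL and TAPEPOOL are hypothetical pool names */
define stgpool diskpool disk
 nextstgpool=tapepool highmig=80 lowmig=40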

CAChe
Specifies whether the migration process leaves a cached copy of a file in this storage pool after you migrate the file to the next storage pool. This parameter is optional. The default value is NO. Possible values are:
Yes
Specifies that caching is enabled.
No
Specifies that caching is disabled.

Using cache might improve the ability to retrieve files, but might affect the performance of other processes.

MIGPRocess
Specifies the number of processes that the server uses for migrating files from this storage pool. This parameter is optional. You can specify an integer from 1 to 999. The default value is 1.

During migration, these processes are run in parallel to provide the potential for improved migration rates.

Tips:
  • The number of migration processes is dependent upon the following settings:
    • The MIGPROCESS parameter
    • The collocation setting of the next pool
    • The number of nodes or the number of collocation groups with data in the storage pool that is being migrated
    For example, suppose that MIGPROCESS=6, the next pool COLLOCATE parameter is set to NODE, but there are only two nodes with data on the storage pool. Migration processing consists of only two processes, not six. If the COLLOCATE parameter is set to GROUP and both nodes are in the same group, migration processing consists of only one process. If the COLLOCATE parameter is set to NO or FILESPACE, and each node has two file spaces with backup data, then migration processing consists of four processes.
  • When you specify this parameter, consider whether the simultaneous-write function is enabled for server data migration. Each migration process requires a mount point and a drive for each copy storage pool and active-data pool that is defined to the target storage pool.
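
For example, the following hypothetical command requests four parallel migration processes. As the tips describe, the effective number of processes can be lower if the next pool is collocated and fewer nodes or collocation groups have data in the pool:
/* DISKPOOL is a hypothetical pool name */
update stgpool diskpool migprocess=4
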
MIGDelay
Specifies the minimum number of days a file must remain in a storage pool before it becomes eligible for migration. To calculate a value to compare to the specified MIGDELAY value, the server counts the following items:
  • The number of days that the file was in the storage pool
  • The number of days, if any, since the file was retrieved by a client
The lesser of the two values is compared to the specified MIGDELAY value. For example, if all the following conditions are true, a file is not migrated:
  • A file was in a storage pool for five days.
  • The file was accessed by a client within the past three days.
  • The value that is specified for the MIGDELAY parameter is four days.

This parameter is optional. You can specify an integer from 0 to 9999. The default is 0, which means that you do not want to delay migration.

If you want the server to count the number of days that are based on when a file was stored and not when it was retrieved, use the NORETRIEVEDATE server option.
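
For example, to keep files in the pool for at least two weeks after they are stored or last retrieved, you might specify a delay of 14 days. The pool names in this sketch are hypothetical:
/* DISKPOOL and TAPEPOOL are hypothetical pool names */
define stgpool diskpool disk
 nextstgpool=tapepool migdelay=14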

MIGContinue
Specifies whether you allow the server to migrate files that do not satisfy the migration delay time. This parameter is optional. The default is YES.

Because you can require that files remain in the storage pool for a minimum number of days, the server may migrate all eligible files to the next storage pool yet not meet the low migration threshold. This parameter allows you to specify whether the server is allowed to continue the migration process by migrating files that do not satisfy the migration delay time.

Possible values are:
Yes
Specifies that, when necessary to meet the low migration threshold, the server continues to migrate files that do not satisfy the migration delay time.

If you allow more than one migration process for the storage pool, some files that do not satisfy the migration delay time may be migrated unnecessarily. As one process migrates files that satisfy the migration delay time, a second process could begin migrating files that do not satisfy the migration delay time to meet the low migration threshold. The first process that is still migrating files that satisfy the migration delay time might have, by itself, caused the low migration threshold to be met.

No
Specifies that the server stops migration when no eligible files remain to be migrated, even before reaching the low migration threshold. The server does not migrate files unless the files satisfy the migration delay time.
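
For example, combining a migration delay with MIGCONTINUE=NO guarantees that no file younger than the delay leaves the pool, at the cost of possibly not reaching the low migration threshold. The pool names in this sketch are hypothetical:
/* DISKPOOL and TAPEPOOL are hypothetical pool names */
define stgpool diskpool disk
 nextstgpool=tapepool migdelay=14 migcontinue=no
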
AUTOCopy
Specifies when Tivoli® Storage Manager runs simultaneous-write operations. The default value is CLIENT. This parameter is optional and affects the following operations:
  • Client store sessions
  • Server import processes
  • Server data-migration processes

If an error occurs while data is being simultaneously written to a copy storage pool or active-data pool during a migration process, the server stops writing to the failing storage pools for the remainder of the process. However, the server continues to store files into the primary storage pool and any remaining copy storage pools or active-data pools. These pools remain active for the duration of the migration process. Copy storage pools are specified using the COPYSTGPOOLS parameter. Active-data pools are specified using the ACTIVEDATAPOOLS parameter.

Possible values are:
None
Specifies that the simultaneous-write function is disabled.
CLient
Specifies that data is written simultaneously to copy storage pools and active-data pools during client store sessions or server import processes. During server import processes, data is written simultaneously to only copy storage pools. Data is not written to active-data pools during server import processes.
MIGRation
Specifies that data is written simultaneously to copy storage pools and active-data pools only during migration to this storage pool. During server data-migration processes, data is written simultaneously to copy storage pools and active-data pools only if the data does not exist in those pools. Nodes whose data is being migrated must be in a domain associated with an active-data pool. If the nodes are not in a domain associated with an active-data pool, the data cannot be written to the pool.
All
Specifies that data is written simultaneously to copy storage pools and active-data pools during client store sessions, server import processes, or server data-migration processes. Specifying this value ensures that data is written simultaneously whenever this pool is a target for any of the eligible operations.
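
For example, to write data simultaneously to a copy storage pool only when data is migrated into this pool, you might combine AUTOCOPY=MIGRATION with the COPYSTGPOOLS parameter. In this sketch, COPYPOOL is assumed to be a previously defined copy storage pool:
/* COPYPOOL is assumed to be an existing copy storage pool */
define stgpool diskpool disk
 copystgpools=copypool autocopy=migration
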
COPYSTGpools
Specifies the names of copy storage pools where the server simultaneously writes data. The COPYSTGPOOLS parameter is optional. You can specify a maximum of three copy pool names that are separated by commas. Spaces between the names of the copy pools are not allowed. When you specify a value for the COPYSTGPOOLS parameter, you can also specify a value for the COPYCONTINUE parameter.

The combined total number of storage pools that are specified in the COPYSTGPOOLS and ACTIVEDATAPOOLS parameters cannot exceed three.

When a data storage operation switches from a primary storage pool to a next storage pool, the next storage pool inherits the list of copy storage pools and the COPYCONTINUE value from the primary storage pool. The primary storage pool is specified by the copy group of the management class that is bound to the data.

The server can write data simultaneously to copy storage pools during the following operations:
  • Back up and archive operations by Tivoli Storage Manager backup-archive clients or application clients that are using the Tivoli Storage Manager API
  • Migration operations by Tivoli Storage Manager for Space Management clients
  • Import operations that involve copying exported file data from external media to a primary storage pool associated with a copy storage pool list
Restriction: The simultaneous-write function is not supported for the following store operations:
  • When the operation is using LAN-free data movement. Simultaneous-write operations take precedence over LAN-free data movement, causing the operations to go over the LAN. However, the simultaneous-write configuration is followed.
  • NAS backup operations. If the primary storage pool specified in the DESTINATION or TOCDESTINATION in the copy group of the management class has copy storage pools that are defined:
    • The copy storage pools are ignored
    • The data is stored into the primary storage pool only
Attention: The function that is provided by the COPYSTGPOOLS parameter is not intended to replace the BACKUP STGPOOL command. If you use the COPYSTGPOOLS parameter, continue to use the BACKUP STGPOOL command to ensure that the copy storage pools are complete copies of the primary storage pool. There are cases when a copy might not be created. For more information, see the COPYCONTINUE parameter description.
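
For example, the following hypothetical sequence defines a pool that writes simultaneously to two copy storage pools and still backs up the primary pool to each copy pool, as the attention note recommends. COPYPOOL1 and COPYPOOL2 are assumed to be previously defined copy storage pools:
/* DISKPOOL, COPYPOOL1, and COPYPOOL2 are hypothetical names */
define stgpool diskpool disk
 copystgpools=copypool1,copypool2 copycontinue=yes
backup stgpool diskpool copypool1
backup stgpool diskpool copypool2
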
COPYContinue
Specifies how the server reacts to a copy storage pool write failure for any of the copy storage pools that are listed in the COPYSTGPOOLS parameter. This parameter is optional. The default value is YES. When you specify the COPYCONTINUE parameter, you must also specify the COPYSTGPOOLS parameter.
Possible values are:
Yes
If the COPYCONTINUE parameter is set to YES, the server will stop writing to the failing copy pools for the remainder of the session, but continue storing files into the primary pool and any remaining copy pools. The copy storage pool list is active only for the life of the client session and applies to all the primary storage pools in a particular storage pool hierarchy.
No
If the COPYCONTINUE parameter is set to NO, the server will fail the current transaction and discontinue the store operation.
Restrictions:
  • The setting of the COPYCONTINUE parameter does not affect active-data pools. If a write failure occurs for any of the active-data pools, the server stops writing to the failing active-data pool for the remainder of the session, but continues storing files into the primary pool and any remaining active-data pools and copy storage pools. The active-data pool list is active only for the life of the session and applies to all the primary storage pools in a particular storage pool hierarchy.
  • The setting of the COPYCONTINUE parameter does not affect the simultaneous-write function during server import. If data is being written simultaneously and a write failure occurs to the primary storage pool or any copy storage pool, the server import process fails.
  • The setting of the COPYCONTINUE parameter does not affect the simultaneous-write function during server data migration. If data is being written simultaneously and a write failure occurs to any copy storage pool or active-data pool, the failing storage pool is removed and the data migration process continues. Write failures to the primary storage pool cause the migration process to fail.
ACTIVEDATApools
Specifies the names of active-data pools where the server simultaneously writes data during a client backup operation. The ACTIVEDATAPOOLS parameter is optional. Spaces between the names of the active-data pools are not allowed.

The combined total number of storage pools that are specified in the COPYSTGPOOLS and ACTIVEDATAPOOLS parameters cannot exceed three.

When a data storage operation switches from a primary storage pool to a next storage pool, the next storage pool inherits the list of active-data pools from the destination storage pool that is specified in the copy group. The primary storage pool is specified by the copy group of the management class that is bound to the data.

The server can write data simultaneously to active-data pools only during backup operations by Tivoli Storage Manager backup-archive clients or application clients that use the Tivoli Storage Manager API.
Restrictions:
  1. This parameter is available only to primary storage pools that use "NATIVE" or "NONBLOCK" data format. This parameter is not available for storage pools that use the following data formats:
    • NETAPPDUMP
    • CELERRADUMP
    • NDMPDUMP
  2. Writing data simultaneously to active-data pools is not supported when you use LAN-free data movement. Simultaneous-write operations take precedence over LAN-free data movement, causing the operations to go over the LAN. However, the simultaneous-write configuration is followed.
  3. The simultaneous-write function is not supported when a NAS backup operation is writing a TOC file. If the primary storage pool specified in the TOCDESTINATION in the copy group of the management class has active-data pools that are defined:
    • The active-data pools are ignored
    • The data is stored into the primary storage pool only
  4. You cannot use the simultaneous-write function with Centera storage devices.
  5. Data that is being imported is not stored in active-data pools. After an import operation, use the COPY ACTIVEDATA command to store the imported data in an active-data pool.
Attention: The function that is provided by the ACTIVEDATAPOOLS parameter is not intended to replace the COPY ACTIVEDATA command. If you use the ACTIVEDATAPOOLS parameter, use the COPY ACTIVEDATA command to ensure that the active-data pools contain all active data of the primary storage pool.
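
For example, the following hypothetical sequence writes active backup data simultaneously to an active-data pool during client backups and also runs the COPY ACTIVEDATA command, as the attention note recommends. ADPOOL is assumed to be a previously defined active-data pool:
/* DISKPOOL is hypothetical; ADPOOL is an assumed existing active-data pool */
define stgpool diskpool disk activedatapools=adpool
copy activedata diskpool adpool
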
SHRED
Specifies whether data is physically overwritten when it is deleted. This parameter is optional. You can specify an integer from 0 to 10. The default value is 0.

If you specify a value of zero, the Tivoli Storage Manager server deletes the data from the database. However, the storage that is used to contain the data is not overwritten, and the data exists in storage until that storage is reused for other data. It might be possible to discover and reconstruct the data after it is deleted.

If you specify a value greater than zero, the Tivoli Storage Manager server deletes the data both logically and physically. The server overwrites the storage that is used to contain the data the specified number of times. This overwriting increases the difficulty of discovering and reconstructing the data after it is deleted.

To ensure that all copies of the data are shredded, specify a SHRED value greater than zero for the storage pool that is specified in the NEXTSTGPOOL parameter. Do not specify either the COPYSTGPOOLS or the ACTIVEDATAPOOLS parameter. Specifying relatively high values for the overwrite count generally improves the level of security, but might affect performance adversely.

Overwriting of deleted data is done asynchronously after the delete operation is complete. Therefore, the space that is occupied by the deleted data remains occupied for some time. The space is not available as free space for new data.

A SHRED value greater than zero cannot be used if the value of the CACHE parameter is YES.

Important: After an export operation finishes and identifies files for export, any change to the storage pool SHRED value is ignored. An export operation that is suspended retains the original SHRED value throughout the operation. You might want to consider canceling your export operation if changes to the storage pool SHRED value jeopardize the operation. You can reissue the export command after any needed cleanup.
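
For example, the following hypothetical command defines a pool whose deleted data is overwritten three times. CACHE is left at its default value of NO because caching cannot be combined with a SHRED value greater than zero:
/* SECUREPOOL is a hypothetical pool name */
define stgpool securepool disk shred=3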

Example: Define a primary storage pool for a DISK device class

Define a primary storage pool, POOL1, to use the DISK device class, with caching enabled. Limit the maximum file size to 5 MB. Store any files larger than 5 MB in subordinate storage pools that begin with the PROG2 storage pool. Set the high migration threshold to 70 percent, and the low migration threshold to 30 percent.
define stgpool pool1 disk
 description="main disk storage pool" maxsize=5m
 highmig=70 lowmig=30 cache=yes
 nextstgpool=prog2

