- pool_name (Required)
- Specifies the storage pool to update. This parameter is required.
- DESCription
- Specifies a description of the storage pool. This parameter is
optional. The maximum length of the description is 255 characters.
Enclose the description in quotation marks if it contains any blank
characters. To remove an existing description, specify a null string
("").
- ACCess
- Specifies how client nodes and server processes (such as migration
and reclamation) can access files in the storage pool. This parameter
is optional. Possible values are:
- READWrite
- Specifies that client nodes and server processes can read and
write to files stored on volumes in the storage pool.
- READOnly
- Specifies that client nodes can only read files from the volumes
in the storage pool.
Server processes can move files within the
volumes in the storage pool. However, no new writes are permitted
to volumes in the storage pool from volumes outside the storage pool.
If
this storage pool has been specified as a subordinate storage pool
(with the NEXTSTGPOOL parameter) and is defined
as readonly, the storage pool is skipped when server processes
attempt to write files to the storage pool.
- UNAVailable
- Specifies that client nodes cannot access files stored on volumes
in the storage pool.
Server processes can move files within the
volumes in the storage pool and can also move or copy files from this
storage pool to another storage pool. However, no new writes are permitted
to volumes in the storage pool from volumes outside the storage pool.
If
this storage pool has been specified as a subordinate storage pool
(with the NEXTSTGPOOL parameter) and is defined
as unavailable, the storage pool is skipped when server processes
attempt to write files to the storage pool.
- MAXSIze
- Specifies the maximum
size for a physical file that the server can store in the storage
pool. This parameter is optional. Possible values are:
- NOLimit
- Specifies that there is no maximum size limit for physical files
stored in the storage pool.
- maximum_file_size
- Limits the maximum physical file size. Specify an integer from
1 to 999999, followed by a scale factor. For example, MAXSIZE=5G specifies
that the maximum file size for this storage pool is 5 gigabytes. Scale
factors are:
- K (kilobyte)
- M (megabyte)
- G (gigabyte)
- T (terabyte)
If a file exceeds the maximum size and no pool is
specified as the next storage pool in the hierarchy, the server does
not store the file. If a file exceeds the maximum size and a pool
is specified as the next storage pool, the server stores the file
in the next storage pool that can accept the file size. If you specify
the next storage pool parameter, at least one storage pool in your
hierarchy should have no limit on the maximum size of a file. By having
no limit on the size for at least one pool, you ensure that no matter
what its size, the server can store the file.
For
logical files that are part of an aggregate, the server considers
the size of the aggregate to be the file size. Therefore, the server
does not store logical files that are smaller than the maximum size
limit if the files are part of an aggregate that is larger than the
maximum size limit.
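The MAXSIZE rules above can be sketched as follows. This is an illustrative model only: the function names are hypothetical, and it assumes binary (1024-based) scale factors, which the command reference does not state explicitly.

```python
# Hypothetical sketch of how a MAXSIZE value such as "5G" or "NOLimit"
# could be interpreted. Assumes 1024-based scale factors.
SCALE = {"K": 1024, "M": 1024**2, "G": 1024**3, "T": 1024**4}

def parse_maxsize(value):
    """Return a byte limit, or None for NOLIMIT."""
    if value.upper() == "NOLIMIT":
        return None
    number, factor = int(value[:-1]), value[-1].upper()
    if not 1 <= number <= 999999 or factor not in SCALE:
        raise ValueError("MAXSIZE must be 1-999999 followed by K, M, G, or T")
    return number * SCALE[factor]

def pool_accepts(maxsize, file_bytes):
    """True if the pool can store a physical file of this size.
    For logical files in an aggregate, pass the aggregate size."""
    limit = parse_maxsize(maxsize)
    return limit is None or file_bytes <= limit
```

For example, with MAXSIZE=5G a 4 GB physical file is accepted but a 6 GB file is not, and the server would look to the next storage pool (if one is defined) for the larger file.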
- CRCData
- Specifies whether a cyclic redundancy
check (CRC) validates storage pool data when audit volume processing
occurs on the server. This parameter is optional. The default value
is NO. By setting CRCDATA to YES and scheduling an AUDIT
VOLUME command, you can continually ensure the integrity
of data that is stored in your storage hierarchy. Possible values are:
- Yes
- Specifies that data is stored containing CRC information, allowing
for audit volume processing to validate storage pool data. This mode
impacts performance because additional overhead is required to calculate
and compare CRC values between the storage pool and the server.
- No
- Specifies that data is stored without CRC information.
- NEXTstgpool
- Specifies a primary
storage pool to which files are migrated. This parameter is optional.
To remove an existing storage pool from the storage hierarchy,
specify a null string ("") for this value.
If this storage pool
does not have a next storage pool, the server cannot migrate files
from this storage pool and cannot store files that exceed the maximum
size for this storage pool in another storage pool.
You cannot
create a chain of storage pools that leads to an endless loop through
the NEXTSTGPOOL parameter. At least one storage pool in the hierarchy
must have no value specified for NEXTSTGPOOL.
If you specify
a sequential access pool as the NEXTSTGPOOL, the pool must use the NATIVE
or NONBLOCK data format.
- HIghmig
- Specifies that the server
starts migration for this storage pool when the amount of data in
the pool reaches this percentage of the pool's estimated capacity.
This parameter is optional. You can specify an integer from 0 to 100.
When the storage pool exceeds the high migration threshold, the
server can start migration of files by node to the next storage pool,
as defined with the NEXTSTGPOOL parameter. You
can specify HIGHMIG=100 to prevent migration for this storage pool.
- LOwmig
- Specifies that the server stops migration for this storage pool
when the amount of data in the pool reaches this percentage of the
pool's estimated capacity. You can specify an integer from 0 to 99
for this optional parameter.
Because migration
is by node or filespace (depending upon collocation), the occupancy
of the storage pool can fall below the value you specified for this
parameter. You can set LOWMIG=0 to empty the storage pool.
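The interaction of the HIGHMIG and LOWMIG thresholds can be sketched as below. This is a simplified model with illustrative function names, not the server's actual scheduling logic; it reflects the documented behavior that HIGHMIG=100 prevents migration and LOWMIG=0 empties the pool.

```python
# Hedged sketch of the migration thresholds. Percentages are of the
# pool's estimated capacity.

def should_start_migration(used_pct, highmig):
    # Migration starts when occupancy exceeds the high threshold,
    # so HIGHMIG=100 effectively prevents migration.
    return used_pct > highmig

def should_stop_migration(used_pct, lowmig):
    # Migration stops once occupancy falls to the low threshold,
    # so LOWMIG=0 allows the pool to be emptied.
    return used_pct <= lowmig
```

Note that because migration moves whole nodes or file spaces, actual occupancy can end below the LOWMIG value.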
- CAChe
- Specifies whether the migration process
leaves a cached copy of a file in this storage pool after you migrate
the file to the next storage pool. This parameter is optional. Possible
values are:
- Yes
- Specifies that caching is enabled.
- No
- Specifies that caching is disabled.
Using cache might improve your ability to retrieve
files, but might affect the performance of other processes.
- MIGPRocess
- Specifies the number of processes
that are used for migrating files from this storage pool. This parameter
is optional. You can specify an integer from 1 to 999.
During migration,
these processes are run in parallel to provide the potential for improved
migration rates.
Tips: - The number of migration processes that are started depends
on the following factors:
- The setting of the MIGPROCESS parameter
- The collocation setting of the next pool
- The number of nodes or the number of collocation groups with data
in the storage pool that is being migrated
For example, suppose MIGPROCESS=6 and the next pool's COLLOCATE parameter
is NODE, but only two nodes have data
in the storage pool. Migration processing consists of only two processes,
not six. If the COLLOCATE parameter is GROUP
and both nodes are in the same group, migration processing consists
of only one process. If the COLLOCATE parameter
is NO or FILESPACE,
and each node has two file spaces with backup data, then migration
processing consists of only four processes.
- When you specify this parameter, consider whether the simultaneous-write
function is enabled for server data migration. Each migration process
requires a mount point and a drive for each copy storage pool and
active-data pool that is defined to the target storage pool.
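The worked example in the tips above reduces to a simple bound: the number of migration processes actually started cannot exceed the number of collocation units (nodes, collocation groups, or node file spaces, depending on the next pool's COLLOCATE setting) that hold data. A minimal sketch, with an illustrative function name:

```python
# Hedged sketch: effective migration parallelism is capped by the
# number of collocation units with data in the pool being migrated.

def effective_processes(migprocess, collocation_units):
    """migprocess: the MIGPROCESS parameter value (1-999).
    collocation_units: nodes, collocation groups, or filespaces with
    data, per the next pool's COLLOCATE setting."""
    return min(migprocess, collocation_units)
```

With MIGPROCESS=6: two nodes under COLLOCATE=NODE give two processes; one shared group under COLLOCATE=GROUP gives one; four file spaces under COLLOCATE=FILESPACE give four.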
- MIGDelay
- Specifies the minimum number of days a file must remain in a storage
pool before it becomes eligible for migration. To calculate a value
to compare to the specified MIGDELAY value, the
server counts the number of days that the file was in the storage
pool and the number of days, if any, since the file was retrieved
by a client. The lesser of the two values is compared to the specified
MIGDELAY value. For example, if all the following conditions are true,
a file is not migrated:
- A file was in a storage pool for five days.
- The file was accessed by a client within the past three days.
- The value that is specified for the MIGDELAY parameter
is four days.
This parameter is optional. You can specify an integer from 0 to 9999.
If
you want the server to count the number of days that are based only
on when a file was stored and not when it was retrieved, use the NORETRIEVEDATE server option.
- MIGContinue
- Specifies whether you allow the server to migrate files that do
not satisfy the migration delay time. This parameter is optional.
Because you can require that files remain in the storage
pool for a minimum number of days, the server may migrate all eligible
files to the next storage pool yet not meet the low migration threshold.
This parameter allows you to specify whether the server is allowed
to continue the migration process by migrating files that do not satisfy
the migration delay time.
Possible values are:
- Yes
- Specifies that, when necessary to meet the low migration threshold,
the server continues to migrate files that do not satisfy the migration
delay time.
If you allow more than one migration process for the
storage pool, some files that do not satisfy the migration delay time
may be migrated unnecessarily. As one process migrates files that
satisfy the migration delay time, a second process could begin migrating
files that do not satisfy the migration delay time to meet the low
migration threshold. The first process that is still migrating files
that satisfy the migration delay time might have, by itself, caused
the low migration threshold to be met.
- No
- Specifies that the server stops migration when no eligible files
remain to be migrated, even before reaching the low migration threshold.
The server does not migrate files unless the files satisfy the migration
delay time.
- AUTOCopy
- Specifies when Tivoli® Storage
Manager writes data
simultaneously to copy storage pools and active-data pools. This parameter
affects the following operations:
- Client store sessions
- Server import processes
- Server data-migration processes
If an error occurs while
data is being simultaneously written to a copy storage pool or active-data
pool during a migration process, the server stops writing to the failing
storage pools for the remainder of the process. However, the server
continues to store files into the primary storage pool and any remaining
copy storage pools or active-data pools. These pools remain active
for the duration of the migration process. Copy storage pools are
specified using the COPYSTGPOOLS parameter. Active-data
pools are specified using the ACTIVEDATAPOOLS parameter.
Possible values are:
- None
- Specifies that the simultaneous-write function is disabled.
- CLient
- Specifies that data is written simultaneously to copy storage
pools and active-data pools during client store sessions or server
import processes. During server import processes, data is written
simultaneously to only copy storage pools. Data is not written to
active-data pools during server import processes.
- MIGRation
- Specifies that data is written simultaneously to copy storage
pools and active-data pools only during migration to this storage
pool. During server data-migration processes, data is written simultaneously
to copy storage pools and active-data pools only if the data does
not exist in those pools. Nodes whose data is being migrated must
be in a domain associated with an active-data pool. If the nodes are
not in a domain associated with an active-data pool, the data cannot be
written to the pool.
- All
- Specifies that data is written simultaneously to copy storage
pools and active-data pools during client store sessions, server import
processes, or server data-migration processes. Specifying this value
ensures that data is written simultaneously whenever this pool is
a target for any of the eligible operations.
- COPYSTGpools
- Specifies the names of copy storage pools where the server writes
data simultaneously. You can specify a maximum of three copy pool names
that are separated by commas. Spaces between the names of the copy
pools are not permitted. To add or remove one or more copy storage
pools, specify the pool name or names that you want to include in
the updated list. For example, if the existing copy pool list includes
COPY1 and COPY2 and you want to add COPY3, specify COPYSTGPOOLS=COPY1,COPY2,COPY3.
To remove all existing copy storage pools that are associated with
the primary storage pool, specify a null string ("") for the value
(for example, COPYSTGPOOLS="").
When specifying a value for the
COPYSTGPOOLS parameter, you can also specify a value for the COPYCONTINUE
parameter. For more information, see the COPYCONTINUE parameter.
The
combined total number of storage pools that are specified in the COPYSTGPOOLS
and ACTIVEDATAPOOLS parameters cannot exceed three.
When
a data storage operation switches from a primary storage pool to a
next storage pool, the next storage pool inherits the list of copy
storage pools and the COPYCONTINUE value from the primary storage pool.
The primary storage pool is specified by the copy group of the management
class that is bound to the data.
The
server can write data simultaneously to copy storage pools for the
following operations:
- Backup and archive operations by Tivoli Storage
Manager backup-archive
clients or application clients using the Tivoli Storage
Manager API
- Migration operations by Tivoli Storage
Manager for Space Management clients
- Import operations that involve copying exported file data from
external media to a primary storage pool associated with a copy storage
pool list
Restrictions: The
simultaneous-write function is not supported for the following store
operations:
- When the operation is using LAN-free data movement. Simultaneous-write
operations take precedence over LAN-free operations, causing the operations
to go over the LAN. However, the simultaneous-write configuration
is accepted.
- NAS backup operations. If the primary storage pool specified in
the DESTINATION or TOCDESTINATION in the copy group of the management
class has copy storage pools defined, the copy storage pools are ignored
and the data is stored into the primary storage pool only.
Attention: The function that is provided by the COPYSTGPOOLS parameter
is not intended to replace the BACKUP STGPOOL command.
If you use the COPYSTGPOOLS parameter, continue
to use the BACKUP STGPOOL command to ensure that
the copy storage pools are complete copies of the primary storage
pool. There are cases when a copy might not be created. For more information,
see the COPYCONTINUE parameter description.
- COPYContinue
- Specifies how the server reacts
to a copy storage pool write failure for any of the copy storage pools
that are listed in the COPYSTGPOOLS parameter.
This parameter is optional. When you specify the COPYCONTINUE parameter,
either a COPYSTGPOOLS list must exist or the COPYSTGPOOLS parameter
must also be specified.
Possible values are:
- Yes
- If the COPYCONTINUE parameter is set to YES,
the server will stop writing to the failing copy pools for the remainder
of the session, but continue storing files into the primary pool and
any remaining copy pools. The copy storage pool list is active only
for the life of the client session and applies to all the primary
storage pools in a particular storage pool hierarchy.
- No
- If the COPYCONTINUE parameter is set to NO,
the server will fail the current transaction and discontinue the store
operation.
Restrictions: - The setting of the COPYCONTINUE parameter
does not affect active-data pools. If a write failure occurs for any
of the active-data pools, the server stops writing to the failing
active-data pool for the remainder of the session, but continues storing
files into the primary pool and any remaining active-data pools and
copy storage pools. The active-data pool list is active only for the
life of the session and applies to all the primary storage pools in
a particular storage pool hierarchy.
- The setting of the COPYCONTINUE parameter
does not affect the simultaneous-write function during server import.
If data is being written simultaneously and a write failure occurs
to the primary storage pool or any copy storage pool, the server import
process fails.
- The setting of the COPYCONTINUE parameter
does not affect the simultaneous-write function during server data
migration. If data is being written simultaneously and a write failure
occurs to any copy storage pool or active-data pool, the failing storage
pool is removed and the data migration process continues. Write failures
to the primary storage pool cause the migration process to fail.
- ACTIVEDATApools
- Specifies the names of
active-data pools where the server writes data simultaneously during
a client backup operation. The ACTIVEDATAPOOLS parameter is optional.
Spaces between the names of the active-data pools are not permitted.
The
combined total number of storage pools that are specified in the COPYSTGPOOLS and ACTIVEDATAPOOLS parameters
cannot exceed three.
When a data storage operation switches
from a primary storage pool to a next storage pool, the next storage
pool inherits the list of active-data pools from the destination storage pool specified
in the copy group. The primary storage pool is specified by the copy
group of the management class that is bound to the data.
The
server can write data simultaneously to active-data pools only during
backup operations by
Tivoli Storage
Manager backup-archive
clients or application clients that use the
Tivoli Storage
Manager API.
Restrictions: - This parameter is available only to primary storage pools that
use NATIVE or NONBLOCK data format. This parameter is not available
for storage pools that use the following data formats:
- NETAPPDUMP
- CELERRADUMP
- NDMPDUMP
- Writing data simultaneously to active-data pools is not supported
when the operation is using LAN-free data movement. Simultaneous-write
operations take precedence over LAN-free operations, causing the operations
to go over the LAN. However, the simultaneous-write configuration
is accepted.
- Simultaneous-write operations are not supported when a NAS backup
operation is writing a TOC file. If the primary storage pool specified
in the TOCDESTINATION in the copy group of the management class has
active-data pools defined, the active-data pools are ignored and the
data is stored into the primary storage pool only.
- You cannot use the simultaneous-write function with Centera storage
devices.
- Data being imported will not be stored in active-data pools. After
an import operation, use the COPY ACTIVEDATA command
to store the imported data in an active-data pool.
Attention: The function that is provided by
the ACTIVEDATAPOOLS parameter is not intended
to replace the COPY ACTIVEDATA command. If you
use the ACTIVEDATAPOOLS parameter, use the COPY
ACTIVEDATA command to ensure that the active-data pools
contain all active data of the primary storage pool.
- SHRED
- Specifies
whether data is physically overwritten when it is deleted. This parameter
is optional. You can specify an integer from 0 to 10.
If you specify
a value of 0, the Tivoli Storage
Manager server deletes
the data from the database. However, the storage used to contain the
data will not be overwritten, and the data will still exist in storage
until that storage is reused for other data. It might be possible
to discover and reconstruct the data after it has been deleted. Changing
the value (for example, resetting it to 0) will not affect data that
was deleted and is waiting to be overwritten.
If you specify
a value greater than 0, the Tivoli Storage
Manager server deletes
the data both logically and physically. The server overwrites the storage used
to contain the data the specified number of times. This prevents any
attempts to discover and reconstruct the data after it has been deleted.
To
ensure that all copies of the data are shredded, specify a SHRED value
greater than 0 for the storage pool specified in the NEXTSTGPOOL parameter,
and do not specify either the COPYSTGPOOLS or ACTIVEDATAPOOLS. Specifying
relatively high values for the overwrite count will generally improve
the level of security, but can affect performance adversely.
Overwriting
of deleted data is performed asynchronously after the delete operation
is complete. Therefore, the space occupied by the deleted data remains
occupied for some period of time and will not be available as free
space for new data.
A SHRED value greater than zero cannot be
used if the value of the CACHE parameter is YES. If you want to enable
shredding for an existing storage pool for which caching is already
enabled, you must change the value of the CACHE parameter to NO.
Existing cached files remain in storage so that subsequent retrieval
requests can be satisfied quickly. If space is needed to store new
data, the existing cached files are erased so that the space they
occupied can be used for the new data. The existing cached files will
not be shredded when they are erased.
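The SHRED constraints above can be sketched as a validation check. The function name is illustrative and the check is a simplified model of the documented rules: SHRED must be 0 to 10, and a value greater than zero cannot be combined with CACHE=YES.

```python
# Hedged sketch of the documented SHRED constraints.

def validate_shred(shred, cache_enabled):
    """shred: the SHRED overwrite count (0-10).
    cache_enabled: True if the pool's CACHE parameter is YES."""
    if not 0 <= shred <= 10:
        raise ValueError("SHRED must be an integer from 0 to 10")
    if shred > 0 and cache_enabled:
        raise ValueError("Set CACHE=NO before enabling shredding")
    return True
```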
Important: After
an export operation finishes identifying files for export, any changes
to the storage pool SHRED value are ignored. An export operation that is suspended retains the original
SHRED value throughout the operation. You might want to consider canceling
your export operation if changes to the storage pool SHRED value jeopardize
the operation. You can reissue the export command after any needed
cleanup.