DB2 10.5 for Linux, UNIX, and Windows

LOAD command using the ADMIN_CMD procedure

Efficiently loads large amounts of data into a DB2® table. The LOAD command loads data at the page level, bypasses trigger firing and logging, and delays constraint checking and index building until after the data is loaded into the DB2 table.

Data stored on the server can be in the form of a file, tape, or named pipe. Data can also be loaded from a cursor defined from a query running against the currently connected database or a different database, or from the output of a user-written script or application. If the COMPRESS attribute for the table is set to YES, the loaded data is subject to compression on every data and database partition for which a dictionary exists in the table, including data in the XML storage object of the table.

Quick link to File type modifiers for the load utility.

Restrictions

  • The load utility does not support loading data at the hierarchy level.
  • The load utility is not compatible with range-clustered tables.
  • The load utility does not support the NOT LOGGED INITIALLY parameter of the CREATE TABLE or ALTER TABLE statements.

Scope

This command can be issued against multiple database partitions in a single request.

Authorization

One or more of the following authorities:
  • DATAACCESS
  • LOAD authority on the database and the following privileges:
    • INSERT privilege on the table when the load utility is invoked in INSERT mode, TERMINATE mode (to terminate a previous load insert operation), or RESTART mode (to restart a previous load insert operation)
    • INSERT and DELETE privilege on the table when the load utility is invoked in REPLACE mode, TERMINATE mode (to terminate a previous load replace operation), or RESTART mode (to restart a previous load replace operation)
    • INSERT privilege on the exception table, if such a table is used as part of the load operation.
  • To load data into a table that has protected columns, the session authorization ID must have LBAC credentials directly or indirectly through a group or a role that allow write access to all protected columns in the table. Otherwise the load fails and an error (SQLSTATE 5U014) is returned.
  • To load data into a table that has protected rows, the session authorization ID must hold a security label that meets these criteria:
    • The security label is part of the security policy protecting the table.
    • The security label was granted to the session authorization ID directly or indirectly through a group or a role for write access or for all access.
    If the session authorization ID does not hold such a security label, then the load fails and an error (SQLSTATE 5U014) is returned. That security label is used to protect a loaded row if the LBAC credentials of the session authorization ID do not allow it to write to the security label that protects that row in the data. This does not happen, however, when the security policy protecting the table was created with the RESTRICT NOT AUTHORIZED WRITE SECURITY LABEL option of the CREATE SECURITY POLICY statement. In this case the load fails and an error (SQLSTATE 42519) is returned.

    When you load data into a table with protected rows, the target table has one column with a data type of DB2SECURITYLABEL. If the input row of data does not contain a value for that column, that row is rejected unless the usedefaults file type modifier is specified in the load command, in which case the security label you hold for write access from the security policy protecting the table is used. If you do not hold a security label for write access, the row is rejected and processing continues with the next row.

  • For a table with protected rows, if the REPLACE option is specified, the session authorization ID must have the authority to drop the table.
  • If the LOCK WITH FORCE option is specified, any of SYSMAINT, SYSCTRL, or SYSADM authority is required.
  • If the table has row access control activated, then LOAD REPLACE on that table requires the ability to drop the table. Specifically, you must have either CONTROL or DBADM on the table.

Since all load processes (and all DB2 server processes, in general) are owned by the instance owner, and all of these processes use the identification of the instance owner to access needed files, the instance owner must have read access to the input data files, regardless of who invokes the command.

Required connection

Database.

Instance. An explicit attachment is not required. If a connection to the database has been established, an implicit attachment to the local instance is attempted.

Command syntax

>>-LOAD--FROM--------------------------------------------------->

   .-,---------------------------------------------------.   
   V                                                     |   
>----+-filename----------------------------------------+-+------>
     +-pipename----------------------------------------+     
     +-device------------------------------------------+     
     +-(--query-statement--)---------------------------+     
     '-(--DATABASE--database-alias--query-statement--)-'     

>--OF--filetype--+-------------------------+-------------------->
                 |            .-,--------. |   
                 |            V          | |   
                 '-LOBS FROM----lob-path-+-'   

>--+------------------------+----------------------------------->
   |           .-,--------. |   
   |           V          | |   
   '-XML FROM----xml-path-+-'   

>--+--------------------------------+--------------------------->
   |              .---------------. |   
   |              V               | |   
   '-MODIFIED BY----file-type-mod-+-'   

>--+-------------------------------------------------------------------------------------------------+-->
   |                 .-,------------------.                                                          |   
   |                 V                    |                                                          |   
   '-METHOD--+-L--(----col-start--col-end-+--)--+------------------------------------------------+-+-'   
             |                                  |                     .-,-------------------.    | |     
             |                                  |                     V                     |    | |     
             |                                  '-NULL INDICATORS--(----null-indicator-list-+--)-' |     
             |       .-,--------.                                                                  |     
             |       V          |                                                                  |     
             +-N--(----col-name-+--)---------------------------------------------------------------+     
             |       .-,------------.                                                              |     
             |       V              |                                                              |     
             '-P--(----col-position-+--)-----------------------------------------------------------'     

>--+------------------------------------+----------------------->
   '-XMLPARSE--+-STRIP----+--WHITESPACE-'   
               '-PRESERVE-'                 

>--+--------------------------------------------------------------------------------------+-->
   '-XMLVALIDATE USING--+-XDS--+-----------------------+--| Ignore and Map parameters |-+-'   
                        |      '-DEFAULT--schema-sqlid-'                                |     
                        +-SCHEMA--schema-sqlid------------------------------------------+     
                        '-SCHEMALOCATION HINTS------------------------------------------'     

>--+--------------+--+-------------+--+-----------------+------->
   '-SAVECOUNT--n-'  '-ROWCOUNT--n-'  '-WARNINGCOUNT--n-'   

>--+--------------------+--+-------------------------------+---->
   '-MESSAGES ON SERVER-'  '-TEMPFILES PATH--temp-pathname-'   

>--+-INSERT---------------------------+------------------------->
   |          .-KEEPDICTIONARY------. |   
   +-REPLACE--+---------------------+-+   
   |          +-RESETDICTIONARY-----+ |   
   |          '-RESETDICTIONARYONLY-' |   
   +-RESTART--------------------------+   
   '-TERMINATE------------------------'   

>--INTO--table-name--+-------------------------+---------------->
                     |    .-,-------------.    |   
                     |    V               |    |   
                     '-(----insert-column-+--)-'   

>--+----------------------------------------------------------+-->
   |                            .-,-------------------------. |   
   |                            V  (1)   (2)                | |   
   '-FOR EXCEPTION--table-name--------------+-------------+-+-'   
                                            +-NORANGEEXC--+       
                                            '-NOUNIQUEEXC-'       

                   (3)          
   .-STATISTICS NO----------.   
>--+------------------------+----------------------------------->
   '-STATISTICS USE PROFILE-'   

>--+-----------------------------------------------------------------+-->
   |       .-NO----------------------------------------------------. |   
   +-COPY--+-YES--+-USE TSM--+--------------------------+--------+-+-+   
   |              |          '-OPEN--num-sess--SESSIONS-'        |   |   
   |              |     .-,----------------.                     |   |   
   |              |     V                  |                     |   |   
   |              +-TO----device/directory-+---------------------+   |   
   |              '-LOAD--lib-name--+--------------------------+-'   |   
   |                                '-OPEN--num-sess--SESSIONS-'     |   
   '-NONRECOVERABLE--------------------------------------------------'   

   .-WITHOUT PROMPTING-.                                 
>--+-------------------+--+--------------------------+---------->
                          '-DATA BUFFER--buffer-size-'   

>--+--------------------------+--+--------------------+--------->
   '-SORT BUFFER--buffer-size-'  '-CPU_PARALLELISM--n-'   

>--+---------------------+-------------------------------------->
   '-DISK_PARALLELISM--n-'   

>--+--------------------------------+--------------------------->
   '-INDEXING MODE--+-AUTOSELECT--+-'   
                    +-REBUILD-----+     
                    +-INCREMENTAL-+     
                    '-DEFERRED----'     

   .-ALLOW NO ACCESS-----------------------------.   
>--+---------------------------------------------+-------------->
   '-ALLOW READ ACCESS--+----------------------+-'   
                        '-USE--tablespace-name-'     

>--+----------------------------+------------------------------->
   |                    .-YES-. |   
   '-FETCH_PARALLELISM--+-NO--+-'   

>--+----------------------------------------------+------------->
   '-SET INTEGRITY PENDING CASCADE--+-IMMEDIATE-+-'   
                                    '-DEFERRED--'     

>--+-----------------+------------------------------------------>
   '-LOCK WITH FORCE-'   

>--+-----------------------------------------------------------------------------------+-->
   '-SOURCEUSEREXIT--executable--| Redirect Input/Output parameters |--+-------------+-'   
                                                                       '-PARALLELIZE-'     

>--+------------------------------------------------------+----><
   |                            .-----------------------. |   
   | .-PARTITIONED DB CONFIG-.  V                       | |   
   '-+-----------------------+----partitioned-db-option-+-'   

Ignore and Map parameters

|--+--------------------------------+--------------------------->
   |            .-,------------.    |   
   |            V              |    |   
   '-IGNORE--(----schema-sqlid-+--)-'   

>--+----------------------------------------------------+-------|
   |         .-,-----------------------------------.    |   
   |         V                                     |    |   
   '-MAP--(----(--schema-sqlid--,--schema-sqlid--)-+--)-'   

Redirect Input/Output parameters

|--+-------------------------------------------------------------------------------------+--|
   '-REDIRECT--+-INPUT FROM--+-BUFFER--input-buffer-+--+-----------------------------+-+-'   
               |             '-FILE--input-file-----'  '-OUTPUT TO FILE--output-file-' |     
               '-OUTPUT TO FILE--output-file-------------------------------------------'     

Notes:
  1. These keywords can appear in any order.
  2. Each of these keywords can only appear once.
  3. For column-organized tables, the default is the STATISTICS USE PROFILE parameter.

Command parameters

FROM filename | pipename | device | (query-statement) | (DATABASE database-alias query-statement)
Specifies the file, pipe, or device that contains the data being loaded, or the SQL statement itself (with an optional source database) from which to load data by using a cursor.
The query-statement option is used to LOAD from a cursor. It contains only one query statement, which is enclosed in parentheses, and can start with VALUES, SELECT or WITH. For example,
LOAD FROM (SELECT * FROM T1) OF CURSOR INSERT INTO T2

When the DATABASE database-alias clause is included before the query statement in the parentheses, the LOAD command will attempt to load the data using the query-statement from the given database as indicated by the database-alias name. Note that the LOAD will be executed using the user ID and password explicitly provided for the currently connected database (an implicit connection will cause the LOAD to fail).
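For example, the following call loads the result of a query against a cataloged database into a local table; the database alias REMOTEDB and the table names T1 and T2 are illustrative:
CALL SYSPROC.ADMIN_CMD('load from (database remotedb
 select * from t1) of cursor insert into t2')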

If the input source is a file, pipe, or device, it must be accessible from the coordinator partition on the server.

If several names are specified, they will be processed in sequence. If the last item specified is a tape device and the user is prompted for a tape, the LOAD will fail and the ADMIN_CMD procedure will return an error.

Note:
  • A fully qualified path file name must be used and must exist on the server.
  • If data is exported into a file using the EXPORT command using the ADMIN_CMD procedure, the data file is owned by the fenced user ID. This file is not usually accessible by the instance owner. To run the LOAD from CLP or the ADMIN_CMD procedure, the data file must be accessible by the instance owner ID, so read access to the data file must be granted to the instance owner.
  • Loading data from multiple IXF files is supported if the files are physically separate, but logically one file. It is not supported if the files are both logically and physically separate. Note that if more than one logically and physically separate file is specified, then any file after the first one is ignored. (Multiple physical files are considered logically one if they were all created with one invocation of the EXPORT command.)
  • When loading XML data from files into tables in a partitioned database environment, the XML data files must be read-accessible to all the database partitions where loading is taking place.
OF filetype
Specifies the format of the data:
  • ASC (non-delimited ASCII format)
  • DEL (delimited ASCII format)
  • IXF (Integration Exchange Format, PC version) is a binary format that is used exclusively by DB2 databases.
  • CURSOR (a cursor declared against a SELECT or VALUES statement).
Note:
  • When using a CURSOR file type to load XML data into a table in a distributed database environment, the PARTITION_ONLY and LOAD_ONLY modes are not supported.
  • When performing a load using the CURSOR file type where the DATABASE keyword was specified during the DECLARE CURSOR statement, load internally spawns a separate application to fetch the data; when the DATABASE keyword is not specified, load fetches data within the same application. This difference between the two cases can also cause a difference in locking behavior. In particular, if you specify the DATABASE keyword with the same database as the currently connected database (and the same user ID and password as the current connection, if specified), you might encounter a lock issue (such as a lock wait or lock timeout, depending on the database configuration) that can be worked around by omitting the DATABASE keyword.
LOBS FROM lob-path
The path to the data files containing LOB values to be loaded. The path must end with a slash. The path must be fully qualified and accessible from the coordinator partition on the server. The names of the LOB data files are stored in the main data file (ASC, DEL, or IXF), in the column that will be loaded into the LOB column. The maximum number of paths that can be specified is 999. This will implicitly activate the LOBSINFILE behavior.

This option is ignored when specified in conjunction with the CURSOR file type.
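For example, the following call (with illustrative paths and table name) loads a delimited file whose LOB column values name files in the specified LOB path; specifying LOBS FROM implicitly activates the lobsinfile behavior:
CALL SYSPROC.ADMIN_CMD('load from /data/customers.del of del
 lobs from /data/lobfiles/ insert into customers')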

MODIFIED BY file-type-mod
Specifies file type modifier options. See File type modifiers for the load utility.
METHOD
L
Specifies the start and end column numbers from which to load data. A column number is a byte offset from the beginning of a row of data. It is numbered starting from 1. This method can only be used with ASC files, and is the only valid method for that file type.
NULL INDICATORS null-indicator-list
This option can only be used when the METHOD L parameter is specified (that is, when the input file is an ASC file). The null indicator list is a comma-separated list of positive integers specifying the column number of each null indicator field. The column number is the byte offset of the null indicator field from the beginning of a row of data. There must be one entry in the null indicator list for each data field defined in the METHOD L parameter. A column number of zero indicates that the corresponding data field always contains data.

A value of Y in the NULL indicator column specifies that the column data is NULL. Any character other than Y in the NULL indicator column specifies that the column data is not NULL, and that column data specified by the METHOD L option will be loaded.

The NULL indicator character can be changed using the MODIFIED BY option.
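As an illustration, the following call loads two fixed-position fields from an ASC file and uses bytes 21 and 22 of each row as the null indicators for those fields; the file, positions, and table and column names are assumed:
CALL SYSPROC.ADMIN_CMD('load from /data/emp.asc of asc
 method L (1 10, 11 20) null indicators (21, 22)
 insert into emp (name, dept)')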

N
Specifies the names of the columns in the data file to be loaded. The case of these column names must match the case of the corresponding names in the system catalogs. Each table column that is not nullable should have a corresponding entry in the METHOD N list. For example, given data fields F1, F2, F3, F4, F5, and F6, and table columns C1 INT, C2 INT NOT NULL, C3 INT NOT NULL, and C4 INT, method N (F2, F1, F4, F3) is a valid request, while method N (F2, F1) is not valid. This method can only be used with file types IXF or CURSOR.
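Continuing the example above, a METHOD N request against an IXF file might look like this (the file and table names are assumed):
CALL SYSPROC.ADMIN_CMD('load from /data/staff.ixf of ixf
 method N (F2, F1, F4, F3) insert into mytable (c1, c2, c3, c4)')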
P
Specifies the field numbers (numbered from 1) of the input data fields to be loaded. Each table column that is not nullable should have a corresponding entry in the METHOD P list. For example, given data fields F1, F2, F3, F4, F5, and F6, and table columns C1 INT, C2 INT NOT NULL, C3 INT NOT NULL, and C4 INT, method P (2, 1, 4, 3) is a valid request, while method P (2, 1) is not valid. This method can only be used with file types IXF, DEL, or CURSOR, and is the only valid method for the DEL file type.
For each of the fields specified by method P, you need to define a corresponding column in the action statement, unless all columns are accounted for or the first x columns are going to be loaded, as shown in the following example:
  db2 load from datafile1.del of del method P(1, 3, 4)
      replace into table1 (c1, c3, c4)
XML FROM xml-path
Specifies one or more paths that contain the XML files. XDSs are contained in the main data file (ASC, DEL, or IXF), in the column that will be loaded into the XML column.
XMLPARSE
Specifies how XML documents are parsed. If this option is not specified, the parsing behavior for XML documents will be determined by the value of the CURRENT IMPLICIT XMLPARSE OPTION special register.
STRIP WHITESPACE
Specifies to remove whitespace when the XML document is parsed.
PRESERVE WHITESPACE
Specifies not to remove whitespace when the XML document is parsed.
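For example, the following call (with assumed paths and table name) preserves whitespace when parsing the loaded XML documents rather than relying on the CURRENT IMPLICIT XMLPARSE OPTION special register:
CALL SYSPROC.ADMIN_CMD('load from /data/docs.del of del
 xml from /data/xmlfiles/ xmlparse preserve whitespace
 insert into xmltab')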
XMLVALIDATE
Specifies that XML documents are validated against a schema, when applicable.
USING XDS
XML documents are validated against the XML schema identified by the XML Data Specifier (XDS) in the main data file. By default, if the XMLVALIDATE option is invoked with the USING XDS clause, the schema used to perform validation will be determined by the SCH attribute of the XDS. If an SCH attribute is not present in the XDS, no schema validation will occur unless a default schema is specified by the DEFAULT clause.

The DEFAULT, IGNORE, and MAP clauses can be used to modify the schema determination behavior. These three optional clauses apply directly to the specifications of the XDS, and not to each other. For example, if a schema is selected because it is specified by the DEFAULT clause, it will not be ignored if also specified by the IGNORE clause. Similarly, if a schema is selected because it is specified as the first part of a pair in the MAP clause, it will not be re-mapped if also specified in the second part of another MAP clause pair.

USING SCHEMA schema-sqlid
XML documents are validated against the XML schema with the specified SQL identifier. In this case, the SCH attribute of the XML Data Specifier (XDS) will be ignored for all XML columns.
USING SCHEMALOCATION HINTS
XML documents are validated against the schemas identified by XML schema location hints in the source XML documents. If a schemaLocation attribute is not found in the XML document, no validation will occur. When the USING SCHEMALOCATION HINTS clause is specified, the SCH attribute of the XML Data Specifier (XDS) will be ignored for all XML columns.
See examples of the XMLVALIDATE option in the following section.
IGNORE schema-sqlid
This option can only be used when the USING XDS parameter is specified. The IGNORE clause specifies a list of one or more schemas to ignore if they are identified by an SCH attribute. If an SCH attribute exists in the XML Data Specifier for a loaded XML document, and the schema identified by the SCH attribute is included in the list of schemas to ignore, then no schema validation will occur for the loaded XML document.
Note:

If a schema is specified in the IGNORE clause, it cannot also be present in the left side of a schema pair in the MAP clause.

The IGNORE clause applies only to the XDS. A schema that is mapped by the MAP clause will not be subsequently ignored if specified by the IGNORE clause.

DEFAULT schema-sqlid
This option can only be used when the USING XDS parameter is specified. The schema specified through the DEFAULT clause identifies a schema to use for validation when the XML Data Specifier (XDS) of a loaded XML document does not contain an SCH attribute identifying an XML Schema.

The DEFAULT clause takes precedence over the IGNORE and MAP clauses. If an XDS satisfies the DEFAULT clause, the IGNORE and MAP specifications will be ignored.

MAP schema-sqlid
This option can only be used when the USING XDS parameter is specified. Use the MAP clause to specify alternate schemas to use in place of those specified by the SCH attribute of an XML Data Specifier (XDS) for each loaded XML document. The MAP clause specifies a list of one or more schema pairs, where each pair represents a mapping of one schema to another. The first schema in the pair represents a schema that is referred to by an SCH attribute in an XDS. The second schema in the pair represents the schema that should be used to perform schema validation.

If a schema is present in the left side of a schema pair in the MAP clause, it cannot also be specified in the IGNORE clause.

Once a schema pair mapping is applied, the result is final. The mapping operation is non-transitive, and therefore the schema chosen will not be subsequently applied to another schema pair mapping.

A schema cannot be mapped more than once, meaning that it cannot appear on the left side of more than one pair.
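As a combined illustration of the DEFAULT, IGNORE, and MAP clauses (the schema and table names here are assumptions), the following call validates against S1.SCHEMA_A when an XDS has no SCH attribute, skips validation for documents whose XDS names S1.SCHEMA_B, and validates documents whose XDS names S1.SCHEMA_C against S1.SCHEMA_D instead:
CALL SYSPROC.ADMIN_CMD('load from /data/docs.del of del
 xml from /data/xmlfiles/
 xmlvalidate using xds default S1.SCHEMA_A
 ignore (S1.SCHEMA_B)
 map ((S1.SCHEMA_C, S1.SCHEMA_D))
 insert into xmltab')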

SAVECOUNT n
Specifies that the load utility is to establish consistency points after every n rows. This value is converted to a page count, and rounded up to intervals of the extent size. Since a message is issued at each consistency point, this option should be selected if the load operation is monitored using LOAD QUERY. If the value of n is not sufficiently high, the synchronization of activities performed at each consistency point might impact performance.

The default value is zero, meaning that no consistency points are established, unless necessary.

This option is not allowed when specified in conjunction with the CURSOR file type or when loading a table containing an XML column.

The SAVECOUNT parameter is not supported for column-organized tables.
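For example, a consistency point after approximately every 50 000 rows can be requested as follows (the file and table names are illustrative); progress between consistency points can then be monitored with LOAD QUERY:
CALL SYSPROC.ADMIN_CMD('load from /data/sales.del of del
 savecount 50000 insert into sales')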

ROWCOUNT n
Specifies the number, n, of physical records in the file to be loaded. This parameter allows you to load only the first n rows in a file.
WARNINGCOUNT n
Stops the load operation after n warnings. Set this parameter if no warnings are expected, but verification that the correct file and table are being used is desired. If the load file or the target table is specified incorrectly, the load utility generates a warning for each row that it attempts to load, which causes the load to fail. If n is zero, or this option is not specified, the load operation continues regardless of the number of warnings issued.
If the load operation is stopped because the threshold of warnings is encountered, another load operation can be started in RESTART mode. The load operation automatically continues from the last consistency point. Alternatively, another load operation can be initiated in REPLACE mode, starting at the beginning of the input file.
In a partitioned database environment, a LOAD operation can have multiple load and partition agents. Each agent has a WARNINGCOUNT value. If the value of n is reached on a single agent, the LOAD operation fails. The n values are not cumulative. For example, if n is 3 and there are two agents, each with a WARNINGCOUNT of 2, the LOAD operation is successful.
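For example, to stop a load as soon as anything unexpected happens, a low threshold can be set (the file and table names are illustrative):
CALL SYSPROC.ADMIN_CMD('load from /data/staff.del of del
 warningcount 1 insert into sample.staff')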
MESSAGES ON SERVER
Specifies that the message file created on the server by the LOAD command is to be saved. The result set returned will include the following two columns: MSG_RETRIEVAL, which is the SQL statement required to retrieve all the warnings and error messages that occur during this operation, and MSG_REMOVAL, which is the SQL statement required to clean up the messages.

If this clause is not specified, the message file will be deleted when the ADMIN_CMD procedure returns to the caller. The MSG_RETRIEVAL and MSG_REMOVAL columns in the result set will contain null values.

Note that with or without the clause, the fenced user ID or the instance owner's primary group must have the authority to create files under the directory indicated by the DB2_UTIL_MSGPATH registry variable.
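For example, the following call keeps the server message file so that warnings can be inspected afterward by executing the statement returned in the MSG_RETRIEVAL column of the result set (the file and table names are assumed):
CALL SYSPROC.ADMIN_CMD('load from /data/staff.del of del
 messages on server insert into sample.staff')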

TEMPFILES PATH temp-pathname
Specifies the name of the path to be used when creating temporary files during a load operation, and should be fully qualified according to the server database partition.
Temporary files take up file system space. Sometimes, this space requirement is quite substantial. The following list provides an estimate of how much file system space you should allocate for all temporary files:
  • 136 bytes for each message that the load utility generates.
  • 15 KB of overhead if the data file contains long field data or LOBs. This quantity can grow significantly if you specify the INSERT parameter and there is a large amount of long field or LOB data already in the table.
  • On the server, storage space that is equivalent to the raw size of the input data if the column compression dictionary is being built and the data source, such as a pipe or a socket, cannot be reopened.
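For example, to direct the load utility's temporary files to a file system with sufficient free space (the path and names here are illustrative):
CALL SYSPROC.ADMIN_CMD('load from /data/trades.del of del
 tempfiles path /db2/loadtmp insert into trades')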
INSERT
Adds the loaded data to the table without changing the existing table data.

A LOAD INSERT operation into a column-organized table maintains table statistics by default if the table was empty at the start of the load operation.

REPLACE
Deletes all data from the table, and inserts the new data. The table definition and index definitions are not changed. If you specify this parameter when moving data between hierarchies, you can replace only the data for an entire hierarchy, not individual subtables.

You cannot use this parameter to load data into system-period temporal tables.

A LOAD REPLACE operation into a column-organized table maintains table statistics by default.

KEEPDICTIONARY
An existing compression dictionary is preserved across the LOAD REPLACE operation. Provided the table COMPRESS attribute is YES, the newly replaced data is subject to being compressed using the dictionary that existed before the invocation of the load. If no dictionary previously existed in the table, a new dictionary is built using the data that is being replaced into the table as long as the table COMPRESS attribute is YES. The amount of data that is required to build the compression dictionary in this case is subject to the policies of ADC. This data is populated into the table as uncompressed. Once the dictionary is inserted into the table, the remaining data to be loaded is subject to being compressed with this dictionary. For a summary, see the following table.

This option is the default for row-organized tables.

The following example keeps the old dictionary if it is currently in the table:
CALL SYSPROC.ADMIN_CMD('load from staff.del of del replace
 keepdictionary into SAMPLE.STAFF statistics use profile
 data buffer 8')
Table 1. LOAD REPLACE KEEPDICTIONARY
Compress | Table row data dictionary exists | XML storage object dictionary exists (see note 1) | Compression dictionary | Data compression
YES | YES | YES | Preserve table row data and XML dictionaries. | Data to be loaded is subject to compression.
YES | YES | NO | Preserve table row data dictionary and build a new XML dictionary. | Table row data to be loaded is subject to compression. After XML dictionary is built, remaining XML data to be loaded is subject to compression.
YES | NO | YES | Build table row data dictionary and preserve XML dictionary. | After table row data dictionary is built, remaining table row data to be loaded is subject to compression. XML data to be loaded is subject to compression.
YES | NO | NO | Build new table row data and XML dictionaries. | After dictionaries are built, remaining data to be loaded is subject to compression.
NO | YES | YES | Preserve table row data and XML dictionaries. | Data to be loaded is not compressed.
NO | YES | NO | Preserve table row data dictionary. | Data to be loaded is not compressed.
NO | NO | YES | No effect on table row dictionary. Preserve XML dictionary. | Data to be loaded is not compressed.
NO | NO | NO | No effect. | Data to be loaded is not compressed.
Note:
  1. A compression dictionary can be created for the XML storage object of a table only if the XML columns are added to the table in DB2 Version 9.7 or later, or if the table is migrated using an online table move.
  2. If a LOAD REPLACE KEEPDICTIONARY operation is interrupted, the load utility can recover after either LOAD RESTART or LOAD TERMINATE is issued. An existing XML storage object dictionary might not be preserved after recovery from an interrupted LOAD REPLACE KEEPDICTIONARY operation. A new XML storage object dictionary will be created if LOAD RESTART is used.
RESETDICTIONARY
This directive instructs LOAD REPLACE processing to build a new dictionary for the table data object provided that the table COMPRESS attribute is YES. If the COMPRESS attribute is NO and a dictionary was already present in the table it will be removed and no new dictionary will be inserted into the table. A compression dictionary can be built with just one user record. If the loaded data set size is zero and if there is a preexisting dictionary, the dictionary will not be preserved. The amount of data required to build a dictionary with this directive is not subject to the policies of ADC. For a summary, see the following table.

This option is the default for column-organized tables.

The following example resets the current dictionary and builds a new one:
CALL SYSPROC.ADMIN_CMD('load from staff.del of del replace
 resetdictionary into SAMPLE.STAFF statistics use profile
 data buffer 8')
Table 2. LOAD REPLACE RESETDICTIONARY
Compress Table row data dictionary exists XML storage object dictionary exists1 Compression dictionary Data compression
YES YES YES Build new dictionaries2. If the DATA CAPTURE CHANGES option is enabled on the CREATE TABLE or ALTER TABLE statements, the current table row data dictionary is kept (and referred to as the historical compression dictionary). After dictionaries are built, remaining data to be loaded is subject to compression.
YES YES NO Build new dictionaries2. If the DATA CAPTURE CHANGES option is enabled on the CREATE TABLE or ALTER TABLE statements, the current table row data dictionary is kept (and referred to as the historical compression dictionary). After dictionaries are built, remaining data to be loaded is subject to compression.
YES NO YES Build new dictionaries. After dictionaries are built, remaining data to be loaded is subject to compression.
YES NO NO Build new dictionaries. After dictionaries are built, remaining data to be loaded is subject to compression.
NO YES YES Remove dictionaries. Data to be loaded is not compressed.
NO YES NO Remove table row data dictionary. Data to be loaded is not compressed.
NO NO YES Remove XML storage object dictionary. Data to be loaded is not compressed.
NO NO NO No effect. All table data is not compressed.
Notes:
  1. A compression dictionary can be created for the XML storage object of a table only if the XML columns are added to the table in DB2 Version 9.7 or later, or if the table is migrated using an online table move.
  2. If a dictionary exists and the compression attribute is enabled, but there are no records to load into the table partition, a new dictionary cannot be built and the RESETDICTIONARY operation will not keep the existing dictionary.
RESETDICTIONARYONLY
This option creates a column compression dictionary that is based on the input file, without loading any rows. You can use this option to create the compression dictionary before you ingest any data by using SQL-based utilities. This option is applicable to column-organized tables only.
TERMINATE
One of four modes under which the load utility can execute. Terminates a previously interrupted load operation, and rolls back the operation to the point in time at which it started, even if consistency points were passed. The states of any table spaces involved in the operation return to normal, and all table objects are made consistent (index objects might be marked as invalid, in which case index rebuild will automatically take place at next access). If the load operation being terminated is a LOAD REPLACE, the table will be truncated to an empty table after the LOAD TERMINATE operation. If the load operation being terminated is a LOAD INSERT, the table will retain all of its original records after the LOAD TERMINATE operation. For a summary of dictionary management, see Table 3.

The LOAD TERMINATE option will not remove a backup pending state from table spaces.
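For example, assuming the staff.del file and SAMPLE.STAFF table used in the earlier example, the following call terminates an interrupted load operation:
CALL SYSPROC.ADMIN_CMD('load from staff.del of del terminate
 into SAMPLE.STAFF')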

RESTART
Restarts an interrupted load operation. The load operation automatically continues from the last consistency point in the load, build, or delete phase. For a summary of dictionary management, see Table 4.

The RESTART parameter is not supported for column-organized tables. To recover a column-organized table after a failed load operation, use the TERMINATE or REPLACE parameter.
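For a row-organized table, an interrupted load can be resumed with the same source and target and the RESTART keyword; for example, reusing the staff.del file and SAMPLE.STAFF table from the earlier example:
CALL SYSPROC.ADMIN_CMD('load from staff.del of del restart
 into SAMPLE.STAFF')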

INTO table-name
Specifies the database table into which the data is to be loaded. This table cannot be a system table, a declared temporary table, or a created temporary table. An alias, or the fully qualified or unqualified table name can be specified. A qualified table name is in the form schema.tablename. If an unqualified table name is specified, the table will be qualified with the CURRENT SCHEMA.
If the database table contains implicitly hidden columns, you must specify whether data for the hidden columns is included in the load operation. Use one of the following methods to indicate if data for hidden columns is included:
  • Use insert-column to explicitly specify the columns into which data is to be inserted.
    db2 load from delfile1 of del
       insert into table1 (c1, c2, c3,...)
  • Use one of the hidden column file type modifiers: specify implicitlyhiddeninclude when the input file contains data for the hidden columns, or implicitlyhiddenmissing when the input file does not.
    db2 load from delfile1 of del modified by implicitlyhiddeninclude
       insert into table1 
  • Use the DB2_DMU_DEFAULT registry variable on the server-side to set the default behavior when data movement utilities encounter tables with implicitly hidden columns. Specify IMPLICITLYHIDDENINCLUDE when utilities assume that the implicitly hidden columns are included, or IMPLICITLYHIDDENMISSING when utilities assume that the implicitly hidden columns are not included.
    db2set DB2_DMU_DEFAULT=IMPLICITLYHIDDENINCLUDE
    db2 load from delfile1 of del insert into table1 
insert-column
Specifies the table column into which the data is to be inserted.
The load utility cannot parse columns whose names contain one or more spaces. For example,
CALL SYSPROC.ADMIN_CMD('load from delfile1 of del noheader 
   method P (1, 2, 3, 4, 5, 6, 7, 8, 9)
   insert into table1 (BLOB1, S2, I3, Int 4, I5, I6, DT7, I8, TM9)')
will fail because of the Int 4 column. The solution is to enclose such column names with double quotation marks:
CALL SYSPROC.ADMIN_CMD('load from delfile1 of del noheader 
   method P (1, 2, 3, 4, 5, 6, 7, 8, 9)
   insert into table1 (BLOB1, S2, I3, "Int 4", I5, I6, DT7, I8, TM9)')
FOR EXCEPTION table-name
Specifies the exception table into which rows in error are copied. Any row that violates a unique index or a primary key index is copied. If you specify an unqualified table name, the table name is qualified with the current schema. The table cannot be a column-organized table.

Information that is written to the exception table is not written to the dump file; the dump file, in contrast, contains rows that cannot be loaded because they are invalid or have syntax errors. In a partitioned database environment, an exception table must be defined for those database partitions on which the loading table is defined.

When loading XML data, using the FOR EXCEPTION clause to specify a load exception table is not supported in the following cases:
  • When using label-based access control (LBAC).
  • When loading data into a partitioned table.
NORANGEEXC
Indicates that if a row is rejected because of a range violation it will not be inserted into the exception table.
NOUNIQUEEXC
Indicates that if a row is rejected because it violates a unique constraint it will not be inserted into the exception table.
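A sketch of these clauses together, assuming an exception table SAMPLE.STAFFEXC has already been created with the same column definitions as SAMPLE.STAFF (the exception table name is illustrative):
CALL SYSPROC.ADMIN_CMD('load from staff.del of del insert
 into SAMPLE.STAFF for exception SAMPLE.STAFFEXC norangeexc')
With NORANGEEXC specified, rows rejected for range violations are discarded rather than copied to SAMPLE.STAFFEXC.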
STATISTICS USE PROFILE
Instructs load to collect statistics during the load according to the profile defined for this table. This is the default for column-organized tables. The profile must be created before issuing the LOAD command. The profile is created by the RUNSTATS command. For row-organized tables, if the profile does not exist and this parameter is specified, a warning is returned and no statistics are collected. For column-organized tables, if the profile does not exist and this parameter is specified, the load utility uses the same default RUNSTATS command options as those that are used during an automatic runstats operation.

During load, distribution statistics are not collected for columns of type XML.

STATISTICS NO
Specifies that no statistics are to be collected, and that the statistics in the catalogs are not to be altered. This parameter is the default for row-organized tables.
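The statistics profile can be created ahead of time with the RUNSTATS command, which can also be issued through ADMIN_CMD. A sketch, assuming the SAMPLE.STAFF table (the RUNSTATS options shown are illustrative):
CALL SYSPROC.ADMIN_CMD('RUNSTATS ON TABLE SAMPLE.STAFF
 WITH DISTRIBUTION AND INDEXES ALL SET PROFILE ONLY')
CALL SYSPROC.ADMIN_CMD('load from staff.del of del replace
 into SAMPLE.STAFF statistics use profile')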
COPY NO
Specifies that the table space in which the table resides will be placed in backup pending state if forward recovery is enabled (that is, if either logarchmeth1 or logarchmeth2 is set to a value other than OFF). The COPY NO option also places the table space in the Load in Progress state. This is a transient state that will disappear when the load completes or aborts. The data in any table in the table space cannot be updated or deleted until a table space backup or a full database backup is made. However, it is possible to access the data in any table by using the SELECT statement.

LOAD with COPY NO on a recoverable database leaves the table spaces in a backup pending state. For example, performing a LOAD with COPY NO and INDEXING MODE DEFERRED will leave indexes needing a refresh. Certain queries on the table might require an index scan and will not succeed until the indexes are refreshed. The index cannot be refreshed if it resides in a table space which is in the backup pending state. In that case, access to the table will not be allowed until a backup is taken. Index refresh is done automatically by the database when the index is accessed by a query. If one of COPY NO, COPY YES, or NONRECOVERABLE is not specified, and the database is recoverable (logarchmeth1 or logarchmeth2 is set to value other than OFF), then COPY NO is the default.

COPY YES
Saves a copy of the loaded data. This parameter is invalid if forward recovery is disabled.
USE TSM
Specifies that the copy will be stored using IBM® Tivoli® Storage Manager.
OPEN num-sess SESSIONS
The number of I/O sessions to be used with TSM or the vendor product. The default value is 1.
TO device/directory
Specifies the device or directory on which the copy image will be created.
LOAD lib-name
The name of the shared library (DLL on Windows operating systems) containing the vendor backup and restore I/O functions to be used. It can contain the full path. If the full path is not given, it will default to the path where the user exit programs reside.
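A sketch of COPY YES on a recoverable database, writing the copy image to a directory (the path is illustrative):
CALL SYSPROC.ADMIN_CMD('load from staff.del of del replace
 into SAMPLE.STAFF copy yes to /db2/copyimages')
Because a copy is saved, the table space is not left in backup pending state after the load.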
NONRECOVERABLE
Specifies that the load transaction is to be marked as unrecoverable and that it will not be possible to recover it by a subsequent roll forward action. The roll forward utility will skip the transaction and will mark the table into which data was being loaded as "invalid". The utility will also ignore any subsequent transactions against that table. After the roll forward operation is completed, such a table can only be dropped or restored from a backup (full or table space) taken after a commit point following the completion of the nonrecoverable load operation.

With this option, table spaces are not put in backup pending state following the load operation, and a copy of the loaded data does not have to be made during the load operation. If one of COPY NO, COPY YES, or NONRECOVERABLE is not specified, and the database is not recoverable (logarchmeth1 and logarchmeth2 are both set to OFF), then NONRECOVERABLE is the default.

WITHOUT PROMPTING
Specifies that the list of data files contains all the files that are to be loaded, and that the devices or directories listed are sufficient for the entire load operation. If a continuation input file is not found, or the copy targets are filled before the load operation finishes, the load operation will fail, and the table will remain in load pending state.

This is the default. Any actions which normally require user intervention will return an error message.

DATA BUFFER buffer-size
Specifies the number of 4 KB pages (regardless of the degree of parallelism) to use as buffered space for transferring data within the utility. If the value specified is less than the algorithmic minimum, the minimum required resource is used, and no warning is returned.

This memory is allocated directly from the utility heap, whose size can be modified through the util_heap_sz database configuration parameter. Beginning in version 9.5, the value of the DATA BUFFER option of the LOAD command can temporarily exceed util_heap_sz if more memory is available in the system. In this situation, the utility heap is dynamically increased as needed until the database_memory limit is reached. This memory will be released once the load operation completes.

If a value is not specified, an intelligent default is calculated by the utility at run time. The default is based on a percentage of the free space available in the utility heap at the instantiation time of the loader, as well as some characteristics of the table.

SORT BUFFER buffer-size
This option specifies a value that overrides the sortheap database configuration parameter during a load operation. It is relevant only when loading tables with indexes and only when the INDEXING MODE parameter is not specified as DEFERRED. The value that is specified cannot exceed the value of sortheap. This parameter is useful for throttling the sort memory that is used when loading tables with many indexes without changing the value of sortheap, which would also affect general query processing.
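Both buffer sizes can be set on the same command. A sketch that requests 4000 pages of data buffer and caps load sort memory at 2000 pages (the values are illustrative, and the SORT BUFFER value must not exceed sortheap):
CALL SYSPROC.ADMIN_CMD('load from staff.del of del insert
 into SAMPLE.STAFF data buffer 4000 sort buffer 2000')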
CPU_PARALLELISM n
Specifies the number of processes or threads that the load utility will create for parsing, converting, and formatting records when building table objects. This parameter is designed to exploit the number of processes running per database partition. It is particularly useful when loading presorted data, because record order in the source data is preserved. If the value of this parameter is zero, or has not been specified, the load utility uses an intelligent default value (usually based on the number of CPUs available) at run time.
Note:
  1. If this parameter is used with tables containing either LOB or LONG VARCHAR fields, its value becomes one, regardless of the number of system CPUs or the value specified by the user.
  2. Specifying a small value for the SAVECOUNT parameter causes the loader to perform many more I/O operations to flush both data and table metadata. When CPU_PARALLELISM is greater than one, the flushing operations are asynchronous, permitting the loader to exploit the CPU. When CPU_PARALLELISM is set to one, the loader waits on I/O during consistency points. A load operation with CPU_PARALLELISM set to two, and SAVECOUNT set to 10 000, completes faster than the same operation with CPU_PARALLELISM set to one, even though there is only one CPU.
DISK_PARALLELISM n
Specifies the number of processes or threads that the load utility will create for writing data to the table space containers. If a value is not specified, the utility selects an intelligent default based on the number of table space containers and the characteristics of the table.
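A sketch that fixes both degrees of parallelism explicitly (the values are illustrative; omitting the parameters lets the utility choose intelligent defaults):
CALL SYSPROC.ADMIN_CMD('load from staff.del of del insert
 into SAMPLE.STAFF cpu_parallelism 4 disk_parallelism 2')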
INDEXING MODE
Specifies whether the load utility is to rebuild indexes or to extend them incrementally. Valid values are:
AUTOSELECT
The load utility will automatically decide between REBUILD or INCREMENTAL mode. The decision is based on the amount of data being loaded and the depth of the index tree. Information relating to the depth of the index tree is stored in the index object. RUNSTATS is not required to populate this information. AUTOSELECT is the default indexing mode.
REBUILD
All indexes will be rebuilt. The utility must have sufficient resources to sort all index key parts for both old and appended table data.

If the LogIndexBuild database configuration parameter is turned on, the transaction log contains the image of each index page after it is created. If the LogIndexBuild database configuration parameter is turned off, only the allocation and initialization of each page is logged by the Index Manager (approximately 250 bytes per page, as opposed to the non-empty portion of each page).

INCREMENTAL
Indexes will be extended with new data. This approach consumes index free space. It only requires enough sort space to append index keys for the inserted records. This method is only supported in cases where the index object is valid and accessible at the start of a load operation (it is, for example, not valid immediately following a load operation in which the DEFERRED mode was specified). If this mode is specified, but not supported due to the state of the index, a warning is returned, and the load operation continues in REBUILD mode. Similarly, if a load restart operation is begun in the load build phase, INCREMENTAL mode is not supported.

If the LogIndexBuild database configuration parameter is turned on, the DB2 software generates the log records for the insertion of every key into the index as well as any page splits performed. If this parameter is turned off (which is common when not using HADR), the amount of index logging performed by the Index Manager depends on whether or not the ALLOW READ ACCESS option was specified. If the ALLOW READ ACCESS option is specified, the log record is generated including logs for page splits. If the ALLOW READ ACCESS option is not specified, no log record from the Index Manager is generated.

DEFERRED
The load utility will not attempt index creation if this mode is specified. Indexes will be marked as needing a refresh. The first access to such indexes that is unrelated to a load operation might force a rebuild, or indexes might be rebuilt when the database is restarted. This approach requires enough sort space for all key parts for the largest index. The total time subsequently taken for index construction is longer than that required in REBUILD mode. Therefore, when performing multiple load operations with deferred indexing, it is advisable (from a performance viewpoint) to let the last load operation in the sequence perform an index rebuild, rather than allow indexes to be rebuilt at first non-load access.

Deferred indexing is only supported for tables with non-unique indexes, so that duplicate keys inserted during the load phase are not persistent after the load operation.
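A sketch that forces a full index rebuild regardless of the amount of data loaded:
CALL SYSPROC.ADMIN_CMD('load from staff.del of del insert
 into SAMPLE.STAFF indexing mode rebuild')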

ALLOW NO ACCESS
Load will lock the target table for exclusive access during the load. The table state will be set to Load In Progress during the load. ALLOW NO ACCESS is the default behavior. It is the only valid option for LOAD REPLACE.

When there are constraints on the table, the table state will be set to Set Integrity Pending as well as Load In Progress. The SET INTEGRITY statement must be used to take the table out of Set Integrity Pending state.

ALLOW READ ACCESS
Load will lock the target table in a share mode. The table state will be set to both Load In Progress and Read Access. Readers can access the non-delta portion of the data while the table is being loaded. In other words, data that existed before the start of the load is accessible to readers of the table; data that is being loaded is not available until the load is complete.
Important: Starting with Version 10.1 Fix Pack 1, the ALLOW READ ACCESS parameter is deprecated and might be removed in a future release. For more details, see ALLOW READ ACCESS parameter in the LOAD command is deprecated.

The ALLOW READ ACCESS parameter is not supported for column-organized tables.

LOAD TERMINATE or LOAD RESTART of an ALLOW READ ACCESS load can use this parameter; LOAD TERMINATE or LOAD RESTART of an ALLOW NO ACCESS load cannot use this parameter. Furthermore, this option is not valid if the indexes on the target table are marked as requiring a rebuild.

When there are constraints on the table, the table state will be set to Set Integrity Pending as well as Load In Progress, and Read Access. At the end of the load, the table state Load In Progress will be removed but the table states Set Integrity Pending and Read Access will remain. The SET INTEGRITY statement must be used to take the table out of Set Integrity Pending. While the table is in Set Integrity Pending and Read Access states, the non-delta portion of the data is still accessible to readers, the new (delta) portion of the data will remain inaccessible until the SET INTEGRITY statement has completed. A user can perform multiple loads on the same table without issuing a SET INTEGRITY statement. Only the original (checked) data will remain visible, however, until the SET INTEGRITY statement is issued.

ALLOW READ ACCESS also supports the following modifiers:
USE tablespace-name
If the indexes are being rebuilt, a shadow copy of the index is built in table space tablespace-name and copied over to the original table space at the end of the load during an INDEX COPY PHASE. Only system temporary table spaces can be used with this option. If not specified then the shadow index will be created in the same table space as the index object. If the shadow copy is created in the same table space as the index object, the copy of the shadow index object over the old index object is instantaneous. If the shadow copy is in a different table space from the index object a physical copy is performed. This could involve considerable I/O and time. The copy happens while the table is offline at the end of a load during the INDEX COPY PHASE.

Without this option the shadow index is built in the same table space as the original. Since both the original index and shadow index by default reside in the same table space simultaneously, there might be insufficient space to hold both indexes within one table space. Using this option ensures that you retain enough table space for the indexes.

This option is ignored if the user does not specify INDEXING MODE REBUILD or INDEXING MODE AUTOSELECT. This option will also be ignored if INDEXING MODE AUTOSELECT is chosen and load chooses to incrementally update the index.
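A sketch combining ALLOW READ ACCESS with a system temporary table space for the shadow index (TEMPSPACE1 is illustrative and must be a system temporary table space):
CALL SYSPROC.ADMIN_CMD('load from staff.del of del insert
 into SAMPLE.STAFF allow read access use TEMPSPACE1
 indexing mode rebuild')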

FETCH_PARALLELISM YES | NO
When you perform a load from a cursor where the cursor is declared using the DATABASE keyword, or when using the API sqlu_remotefetch_entry media entry, and this option is set to YES, the load utility attempts to parallelize fetching from the remote data source if possible. Currently, you can parallelize data fetching only if the cursor's select-statement is of the simple form "SELECT * FROM <tablename>". If set to NO, no parallel fetching is performed. The default value is YES. For more information, see "Moving data using the CURSOR file type".
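A sketch of a cursor load through ADMIN_CMD in the DATABASE keyword form to which FETCH_PARALLELISM applies (the SOURCEDB database alias and the T1 and T2 tables are illustrative):
CALL SYSPROC.ADMIN_CMD('load from (database SOURCEDB select * from T1)
 of cursor insert into T2 fetch_parallelism yes')
Because the select-statement is of the simple form SELECT * FROM T1, the utility can attempt to parallelize the remote fetch.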
SET INTEGRITY PENDING CASCADE
If LOAD puts the table into Set Integrity Pending state, the SET INTEGRITY PENDING CASCADE option allows the user to specify whether or not the Set Integrity Pending state of the loaded table is immediately cascaded to all descendents (including descendent foreign key tables, descendent immediate materialized query tables and descendent immediate staging tables).
IMMEDIATE
Indicates that Set Integrity Pending state is immediately extended to all descendent foreign key tables, descendent immediate materialized query tables and descendent staging tables. For a LOAD INSERT operation, Set Integrity Pending state is not extended to descendent foreign key tables even if the IMMEDIATE option is specified.

When the loaded table is later checked for constraint violations (using the IMMEDIATE CHECKED option of the SET INTEGRITY statement), descendent foreign key tables that were placed in Set Integrity Pending Read Access state will be put into Set Integrity Pending No Access state.

DEFERRED
Indicates that only the loaded table will be placed in the Set Integrity Pending state. The states of the descendent foreign key tables, descendent immediate materialized query tables and descendent immediate staging tables will remain unchanged.

Descendent foreign key tables might later be implicitly placed in Set Integrity Pending state when their parent tables are checked for constraint violations (using the IMMEDIATE CHECKED option of the SET INTEGRITY statement). Descendent immediate materialized query tables and descendent immediate staging tables will be implicitly placed in Set Integrity Pending state when one of its underlying tables is checked for integrity violations. A query of a table that is in the Set Integrity Pending state might succeed if an eligible materialized query table that is not in the Set Integrity Pending state is accessed by the query instead of the specified table. A warning (SQLSTATE 01586) will be issued to indicate that descendent tables have been placed in Set Integrity Pending state. See the Notes section of the SET INTEGRITY statement in the SQL Reference for when these descendent tables will be put into Set Integrity Pending state.

If the SET INTEGRITY PENDING CASCADE option is not specified:

  • Only the loaded table will be placed in Set Integrity Pending state. The state of descendent foreign key tables, descendent immediate materialized query tables and descendent immediate staging tables will remain unchanged, and can later be implicitly put into Set Integrity Pending state when the loaded table is checked for constraint violations.

If LOAD does not put the target table into Set Integrity Pending state, the SET INTEGRITY PENDING CASCADE option is ignored.
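A sketch that cascades the Set Integrity Pending state immediately, followed by the SET INTEGRITY statement that takes the loaded table out of that state:
CALL SYSPROC.ADMIN_CMD('load from staff.del of del insert
 into SAMPLE.STAFF set integrity pending cascade immediate')
SET INTEGRITY FOR SAMPLE.STAFF IMMEDIATE CHECKED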

LOCK WITH FORCE
The utility acquires various locks including table locks in the process of loading. Rather than wait, and possibly timeout, when acquiring a lock, this option allows load to force off other applications that hold conflicting locks on the target table. Applications holding conflicting locks on the system catalog tables will not be forced off by the load utility. Forced applications will roll back and release the locks the load utility needs. The load utility can then proceed. This option requires the same authority as the FORCE APPLICATIONS command (SYSADM, SYSCTRL, or SYSMAINT).

ALLOW NO ACCESS loads might force applications holding conflicting locks at the start of the load operation. At the start of the load the utility can force applications that are attempting to either query or modify the table.

ALLOW READ ACCESS loads can force applications holding conflicting locks at the start or end of the load operation. At the start of the load the load utility can force applications that are attempting to modify the table. At the end of the load operation, the load utility can force applications that are attempting to either query or modify the table.
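A sketch that forces off applications holding conflicting table locks rather than waiting on them (the issuing user must hold SYSADM, SYSCTRL, or SYSMAINT authority):
CALL SYSPROC.ADMIN_CMD('load from staff.del of del replace
 into SAMPLE.STAFF lock with force')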

SOURCEUSEREXIT executable
Specifies the name of an executable file that is called to feed data into the utility.

The SOURCEUSEREXIT parameter is not supported for column-organized tables.

REDIRECT
INPUT FROM
BUFFER input-buffer
The stream of bytes specified in input-buffer is passed into the STDIN file descriptor of the process executing the given executable.
FILE input-file
The contents of this client-side file are passed into the STDIN file descriptor of the process executing the given executable.
OUTPUT TO
FILE output-file
The STDOUT and STDERR file descriptors are captured to the fully qualified server-side file specified.
PARALLELIZE
Increases the throughput of data coming into the load utility by invoking multiple user exit processes simultaneously. This option is only applicable in multi-partition database environments and is ignored in single-partition database environments.
For more information, see "Moving data using a customized application (user exit)".
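A sketch that feeds data from a user-written script and captures its diagnostic output (the script path, output path, and pipe name are illustrative; the FROM argument is passed to the user exit rather than read directly by the utility):
CALL SYSPROC.ADMIN_CMD('load from pipe1 of del insert
 into SAMPLE.STAFF sourceuserexit /usr/local/bin/feed_data.sh
 redirect output to file /tmp/feed_data.out')
The output file is created on the server, per the OUTPUT TO FILE description above.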
PARTITIONED DB CONFIG partitioned-db-option
Allows you to execute a load into a table distributed across multiple database partitions. The PARTITIONED DB CONFIG parameter allows you to specify partitioned database-specific configuration options. The partitioned-db-option values can be any of the following options:
PART_FILE_LOCATION x
OUTPUT_DBPARTNUMS x
PARTITIONING_DBPARTNUMS x
MODE x
MAX_NUM_PART_AGENTS x
ISOLATE_PART_ERRS x
STATUS_INTERVAL x
PORT_RANGE x
CHECK_TRUNCATION
MAP_FILE_INPUT x
MAP_FILE_OUTPUT x
TRACE x
NEWLINE
DISTFILE x
OMIT_HEADER
RUN_STAT_DBPARTNUM x
Detailed descriptions of these options are provided in "Load configuration options for partitioned database environments".
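A sketch of a partitioned load that isolates load errors and reports status at shorter intervals (the option values are illustrative):
CALL SYSPROC.ADMIN_CMD('load from staff.del of del replace
 into SAMPLE.STAFF partitioned db config mode partition_and_load
 isolate_part_errs load_errs_only status_interval 25')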
RESTARTCOUNT
Deprecated.
USING directory
Deprecated.

Example

Issue a load with replace option for the employee table data from a file.
CALL SYSPROC.ADMIN_CMD('LOAD FROM /home/theresax/tmp/emp_exp.dat 
   OF DEL METHOD P (1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14) 
   MESSAGES /home/theresax/tmp/emp_load.msg 
   REPLACE INTO THERESAX.EMPLOYEE (EMPNO, FIRSTNME, MIDINIT, LASTNAME, 
   WORKDEPT, PHONENO, HIREDATE, JOB, EDLEVEL, SEX, BIRTHDATE, SALARY, 
   BONUS, COMM) COPY NO INDEXING MODE AUTOSELECT ISOLATE_PART_ERRS 
   LOAD_ERRS_ONLY MODE PARTITION_AND_LOAD' )
The following section is an example of output from a single-partition database.
Result set 1
--------------

ROWS_READ     ROWS_SKIPPED     ROWS_LOADED     ROWS_REJECTED     ...  
---------...- ------------...- -----------...- -------------...- ...  
           32                0              32                 0 ...  

  1 record(s) selected.

Return Status = 0
Output from a single-partition database (continued).
... ROWS_DELETED     ROWS_COMMITTED     MSG_RETRIEVAL	                    
... ------------...- --------------...- ------------------------------...-
...                0                 32 SELECT SQLCODE, MSG_TEXT FROM     
...                                       TABLE(SYSPROC.ADMIN_GET_MSGS(   
...                                       '2203498_thx')) AS MSG          
Output from a single-partition database (continued).
... MSG_REMOVAL                                      
... --------------------------------------------...- 
... CALL SYSPROC.ADMIN_REMOVE_MSGS('2203498_thx')    
...                                                  
Note: The following columns are also returned in this result set, but are set to NULL because they are only populated when loading into a multi-partition database: ROWS_PARTITIONED and NUM_AGENTINFO_ENTRIES.
The following section is an example of output from a multi-partition database.
Result set 1
--------------

ROWS_READ     ROWS_REJECTED     ROWS_PARTITIONED     NUM_AGENTINFO_ENTRIES ...
---------...- -------------...- ----------------...- --------------------- ...
           32                 0                   32                     5 ...
                                                                           ...
                                                                           ...
                                                                              

  1 record(s) selected.
Output from a multi-partition database (continued).
... MSG_RETRIEVAL                          MSG_REMOVAL
... ----------------------------------...- -----------------------------...- 
... SELECT DBPARTITIONNUM, AGENT_TYPE,     CALL SYSPROC.ADMIN_REMOVE_MSGS
...   SQLCODE, MSG_TEXT FROM TABLE             ('2203498_thx')               
...   (SYSPROC.ADMIN_GET_MSGS 
...   ('2203498_thx')) AS MSG 
Note: The following columns are also returned in this result set, but are set to NULL because they are only populated when loading into a single-partition database: ROWS_SKIPPED, ROWS_LOADED, ROWS_DELETED and ROWS_COMMITTED.
Output from a multi-partition database (continued).
Result set 2
--------------

DBPARTITIONNUM     SQLCODE     TABSTATE     AGENTTYPE        
--------------...- -------...- --------...- ------------...- 
                10           0 NORMAL       LOAD             
                20           0 NORMAL       LOAD             
                30           0 NORMAL       LOAD             
                20           0 NORMAL       PARTITION        
                10           0 NORMAL       PRE_PARTITION    

1 record(s) selected.

Return Status = 0

Example : Loading XML data

The user has constructed a data file with XDS fields to describe the documents that are to be inserted into the table. It might appear like this:

1, "<XDS FIL=""file1.xml"" />"
2, "<XDS FIL='file2.xml' OFF='23' LEN='45' />"

For the first row, the XML document is identified by the file named file1.xml. Note that since the character delimiter is the double quote character, and double quotation marks exist inside the XDS, the double quotation marks contained within the XDS are doubled. For the second row, the XML document is identified by the file named file2.xml, and starts at byte offset 23, and is 45 bytes in length.

The user issues a load command without any parsing or validation options for the XML column, and the data is loaded successfully:
LOAD FROM data.del OF DEL INSERT INTO mytable
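The quote-doubling convention for XDS fields can be reproduced with a standard CSV writer. The following sketch (plain Python, not part of DB2) generates the two rows shown above; the csv module's doublequote option performs the doubling:

```python
import csv
import io

# Rows destined for a DEL file: an integer key and an XDS column value.
rows = [
    (1, '<XDS FIL="file1.xml" />'),
    (2, "<XDS FIL='file2.xml' OFF='23' LEN='45' />"),
]

buf = io.StringIO()
# doublequote=True doubles any '"' inside a quoted field, matching the
# DEL convention when the character delimiter is the double quote.
writer = csv.writer(buf, quotechar='"', doublequote=True,
                    quoting=csv.QUOTE_NONNUMERIC, lineterminator="\n")
writer.writerows(rows)
print(buf.getvalue())
```

The output matches the two rows in the data file above, except that the csv writer does not emit the cosmetic space after the comma.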

Example: Loading XML data from CURSOR

Loading data from a cursor is the same as for a regular relational column type. The user has two tables, T1 and T2, each of which consists of a single XML column named C1. To load from T1 into T2, the user first declares a cursor:

DECLARE X1 CURSOR FOR SELECT C1 FROM T1;

Next, the user issues a LOAD command using the CURSOR file type:

LOAD FROM X1 OF CURSOR INSERT INTO T2

Applying the XML-specific LOAD options to the CURSOR file type is the same as loading from a file.

Usage notes

Summary of LOAD TERMINATE and LOAD RESTART dictionary management

The following chart summarizes the compression dictionary management behavior for LOAD processing under the TERMINATE directive.

Table 3. LOAD TERMINATE dictionary management
Table COMPRESS attribute | Table row data dictionary exists before LOAD? | XML storage object dictionary exists before LOAD?1 | TERMINATE: LOAD REPLACE KEEPDICTIONARY or LOAD INSERT | TERMINATE: LOAD REPLACE RESETDICTIONARY
YES | YES | YES | Keep existing dictionaries. | Neither dictionary is kept.2
YES | YES | NO | Keep existing dictionary. | Nothing is kept.2
YES | NO | YES | Keep existing dictionary. | Nothing is kept.
YES | NO | NO | Nothing is kept. | Nothing is kept.
NO | YES | YES | Keep existing dictionaries. | Nothing is kept.
NO | YES | NO | Keep existing dictionary. | Nothing is kept.
NO | NO | YES | Keep existing dictionary. | Nothing is kept.
NO | NO | NO | Do nothing. | Do nothing.
Note:
  1. A compression dictionary can be created for the XML storage object of a table only if the XML columns are added to the table in DB2 Version 9.7 or later, or if the table is migrated using an online table move.
  2. In the special case that the table has data capture enabled, the table row data dictionary is kept.

LOAD RESTART truncates a table up to the last consistency point reached. As part of LOAD RESTART processing, a compression dictionary will exist in the table if it was present in the table at the time the last LOAD consistency point was taken. In that case, LOAD RESTART will not create a new dictionary. For a summary of the possible conditions, see Table 4.

Table 4. LOAD RESTART dictionary management
Table COMPRESS attribute | Table row data dictionary exists before the last LOAD consistency point?1 | XML storage object dictionary exists before the last LOAD?2 | RESTART: LOAD REPLACE KEEPDICTIONARY or LOAD INSERT | RESTART: LOAD REPLACE RESETDICTIONARY
YES | YES | YES | Keep existing dictionaries. | Keep existing dictionaries.
YES | YES | NO | Keep existing table row data dictionary and build XML dictionary subject to ADC. | Keep existing table row data dictionary and build XML dictionary.
YES | NO | YES | Build table row data dictionary subject to ADC. Keep existing XML dictionary. | Build table row data dictionary. Keep existing XML dictionary.
YES | NO | NO | Build table row data and XML dictionaries subject to ADC. | Build table row data and XML dictionaries.
NO | YES | YES | Keep existing dictionaries. | Remove existing dictionaries.
NO | YES | NO | Keep existing table row data dictionary. | Remove existing table row data dictionary.
NO | NO | YES | Keep existing XML dictionary. | Remove existing XML dictionary.
NO | NO | NO | Do nothing. | Do nothing.
Notes:
  1. The SAVECOUNT option is not allowed when loading XML data; load operations that fail during the load phase restart from the beginning of the operation.
  2. A compression dictionary can be created for the XML storage object of a table only if the XML columns are added to the table in DB2 Version 9.7 or later, or if the table is migrated using an online table move.

Result set information

Command execution status is returned in the SQLCA resulting from the CALL statement. If execution is successful, the command returns additional information. A single-partition database will return one result set; a multi-partition database will return two result sets.
  • Table 5: Result set for a load operation.
  • Table 6: Result set 2 contains information for each database partition in a multi-partition load operation.
Table 5. Result set returned by the LOAD command
Column name Data type Description
ROWS_READ BIGINT Number of rows read during the load operation.
ROWS_SKIPPED BIGINT Number of rows skipped before the load operation started. This information is returned for a single-partition database only.
ROWS_LOADED BIGINT Number of rows loaded into the target table. This information is returned for a single-partition database only.
ROWS_REJECTED BIGINT Number of rows that could not be loaded into the target table.
ROWS_DELETED BIGINT Number of duplicate rows that were not loaded into the target table. This information is returned for a single-partition database only.
ROWS_COMMITTED BIGINT Total number of rows processed: the number of rows successfully loaded into the target table, plus the number of skipped and rejected rows. This information is returned for a single-partition database only.
ROWS_PARTITIONED BIGINT Number of rows distributed by all database distributing agents. This information is returned for a multi-partition database only.
NUM_AGENTINFO_ENTRIES BIGINT Number of entries returned in the second result set for a multi-partition database. This is the number of agent information entries produced by the load operation. This information is returned for a multi-partition database only.
MSG_RETRIEVAL VARCHAR(512) SQL statement that is used to retrieve messages created by this utility. For example,
SELECT SQLCODE, MSG
   FROM TABLE 
   (SYSPROC.ADMIN_GET_MSGS
   ('2203498_thx')) AS MSG
This information is returned only if the MESSAGES ON SERVER clause is specified.
MSG_REMOVAL VARCHAR(512) SQL statement that is used to clean up messages created by this utility. For example:
CALL SYSPROC.ADMIN_REMOVE_MSGS
   ('2203498_thx')
This information is returned only if the MESSAGES ON SERVER clause is specified.
Table 6. Result set 2 returned by the LOAD command for each database partition in a multi-partition database.
Column name Data type Description
DBPARTITIONNUM SMALLINT The database partition number on which the agent executed the load operation.
SQLCODE INTEGER Final SQLCODE resulting from the load processing.
TABSTATE VARCHAR(20) Table state after load operation has completed. It is one of:
  • LOADPENDING: Indicates that the load did not complete, but the table on the partition has been left in a LOAD PENDING state. A load restart or terminate operation must be done on the database partition.
  • NORMAL: Indicates that the load completed successfully on the database partition and the table was taken out of the LOAD IN PROGRESS (or LOAD PENDING) state. Note that the table might still be in Set Integrity Pending state if further constraints processing is required, but this state is not reported by this interface.
  • UNCHANGED: Indicates that the load did not complete due to an error, but the state of the table has not yet been changed. It is not necessary to perform a load restart or terminate operation on the database partition.
Note: Not all possible table states are returned by this interface.
AGENTTYPE VARCHAR(20) The agent type. It is one of:
  • FILE_TRANSFER
  • LOAD
  • LOAD_TO_FILE
  • PARTITIONING
  • PRE_PARTITIONING

File type modifiers for the load utility

Table 7. Valid file type modifiers for the load utility: All file formats
Modifier Description
anyorder This modifier specifies that the preservation of the source data order is not required. Using this modifier yields significant additional performance benefits on SMP systems. Use this modifier with the cpu_parallelism parameter. If the value of the cpu_parallelism parameter is 1, this modifier is ignored. This modifier is not supported if the value of the SAVECOUNT parameter is greater than 0, because crash recovery after a consistency point requires that data be loaded in sequence. This modifier is implicitly turned on for all load operations for column-organized tables, multidimensional clustering (MDC) tables, and range-partitioned tables.
generatedignore This modifier informs the load utility that data for all generated columns is present in the data file but should be ignored. This results in all generated column values being generated by the utility. This modifier cannot be used with either the generatedmissing or the generatedoverride modifier.
generatedmissing If this modifier is specified, the utility assumes that the input data file contains no data for the generated column (not even NULLs). This results in all generated column values being generated by the utility. This modifier cannot be used with either the generatedignore or the generatedoverride modifier.
generatedoverride This modifier instructs the load utility to accept user-supplied data for all generated columns in the table (contrary to the normal rules for these types of columns). This is useful when migrating data from another database system, or when loading a table from data that was recovered using the RECOVER DROPPED TABLE option on the ROLLFORWARD DATABASE command. When this modifier is used, any rows with no data or NULL data for a non-nullable generated column will be rejected (SQL3116W). When this modifier is used, the table will be placed in Set Integrity Pending state. To take the table out of Set Integrity Pending state without verifying the user-supplied values, issue the following command after the load operation:
SET INTEGRITY FOR table-name GENERATED COLUMN 
  IMMEDIATE UNCHECKED
To take the table out of Set Integrity Pending state and force verification of the user-supplied values, issue the following command after the load operation:
SET INTEGRITY FOR table-name IMMEDIATE CHECKED

When this modifier is specified and there is a generated column in any of the partitioning keys, dimension keys or distribution keys, then the LOAD command will automatically convert the modifier to generatedignore and proceed with the load. This will have the effect of regenerating all of the generated column values.

This modifier cannot be used with either the generatedmissing or the generatedignore modifier.

identityignore This modifier informs the load utility that data for the identity column is present in the data file but should be ignored. This results in all identity values being generated by the utility. The behavior will be the same for both GENERATED ALWAYS and GENERATED BY DEFAULT identity columns. This means that for GENERATED ALWAYS columns, no rows will be rejected. This modifier cannot be used with either the identitymissing or the identityoverride modifier.
identitymissing If this modifier is specified, the utility assumes that the input data file contains no data for the identity column (not even NULLs), and will therefore generate a value for each row. The behavior will be the same for both GENERATED ALWAYS and GENERATED BY DEFAULT identity columns. This modifier cannot be used with either the identityignore or the identityoverride modifier.
identityoverride This modifier should be used only when an identity column defined as GENERATED ALWAYS is present in the table to be loaded. It instructs the utility to accept explicit, non-NULL data for such a column (contrary to the normal rules for these types of identity columns). This is useful when migrating data from another database system when the table must be defined as GENERATED ALWAYS, or when loading a table from data that was recovered using the DROPPED TABLE RECOVERY option on the ROLLFORWARD DATABASE command. When this modifier is used, any rows with no data or NULL data for the identity column will be rejected (SQL3116W). This modifier cannot be used with either the identitymissing or the identityignore modifier. The load utility will not attempt to maintain or verify the uniqueness of values in the table's identity column when this option is used.
implicitlyhiddeninclude If this modifier is specified, the utility assumes that the input data file contains data for the implicitly hidden columns and this data will also be loaded. This modifier cannot be used with the implicitlyhiddenmissing modifier. See the Note: section for information about the precedence when multiple modifiers are specified.
implicitlyhiddenmissing If this modifier is specified, the utility assumes that the input data file does not contain data for the implicitly hidden columns and the utility will generate values for those hidden columns. This modifier cannot be used with the implicitlyhiddeninclude modifier. See the Note: section for information about the precedence when multiple modifiers are specified.
indexfreespace=x x is an integer between 0 and 99 inclusive. The value is interpreted as the percentage of each index page that is to be left as free space when load rebuilds the index. Load with INDEXING MODE INCREMENTAL ignores this option. The first entry in a page is added without restriction; subsequent entries are added to maintain the percent free space threshold. The default value is the one used at CREATE INDEX time.

This value takes precedence over the PCTFREE value specified in the CREATE INDEX statement. The indexfreespace option affects index leaf pages only.

lobsinfile lob-path specifies the path to the files containing LOB data. The ASC, DEL, or IXF load input files contain the names of the files having LOB data in the LOB column.

This option is not supported in conjunction with the CURSOR filetype.

The LOBS FROM clause specifies where the LOB files are located when the lobsinfile modifier is used. The LOBS FROM clause will implicitly activate the lobsinfile behavior. The LOBS FROM clause conveys to the LOAD utility the list of paths to search for the LOB files while loading the data.

Each path contains at least one file that contains at least one LOB pointed to by a Lob Location Specifier (LLS) in the data file. The LLS is a string representation of the location of a LOB in a file stored in the LOB file path. The format of an LLS is filename.ext.nnn.mmm/, where filename.ext is the name of the file that contains the LOB, nnn is the offset in bytes of the LOB within the file, and mmm is the length of the LOB in bytes. For example, if the string db2exp.001.123.456/ is stored in the data file, the LOB is located at offset 123 in the file db2exp.001, and is 456 bytes long.

To indicate a null LOB, enter the size as -1. If the size is specified as 0, it is treated as a 0-length LOB. For null LOBs with a length of -1, the offset and the file name are ignored. For example, the LLS of a null LOB might be db2exp.001.7.-1/.
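The LLS format described above can be illustrated with a small parser. parse_lls below is a hypothetical helper for illustration only, not a DB2 API:

```python
def parse_lls(lls):
    """Parse a Lob Location Specifier of the form filename.ext.nnn.mmm/
    into (filename, offset, length); a length of -1 denotes a null LOB.
    Illustrative helper, not part of DB2."""
    if not lls.endswith("/"):
        raise ValueError("an LLS must end with '/'")
    # Split off the last two dot-separated components: offset and length.
    name, offset, length = lls[:-1].rsplit(".", 2)
    return name, int(offset), int(length)

print(parse_lls("db2exp.001.123.456/"))  # LOB at offset 123, 456 bytes long
print(parse_lls("db2exp.001.7.-1/"))     # null LOB (length -1)
```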

noheader Skips the header verification code (applicable only to load operations into tables that reside in a single-partition database partition group).

If the default MPP load (mode PARTITION_AND_LOAD) is used against a table residing in a single-partition database partition group, the file is not expected to have a header. Thus the noheader modifier is not needed. If the LOAD_ONLY mode is used, the file is expected to have a header. The only circumstance in which you should need to use the noheader modifier is if you want to perform a LOAD_ONLY operation using a file that does not have a header.

norowwarnings Suppresses all warnings about rejected rows.
pagefreespace=x x is an integer between 0 and 100 inclusive. The value is interpreted as the percentage of each data page that is to be left as free space. If the specified value is invalid because of the minimum row size, (for example, a row that is at least 3 000 bytes long, and an x value of 50), the row will be placed on a new page. If a value of 100 is specified, each row will reside on a new page. The PCTFREE value of a table determines the amount of free space designated per page. If a pagefreespace value on the load operation or a PCTFREE value on a table has not been set, the utility will fill up as much space as possible on each page. The value set by pagefreespace overrides the PCTFREE value specified for the table.
periodignore This modifier informs the load utility that data for the period columns is present in the data file but should be ignored. When this modifier is specified, all period column values are generated by the utility. This modifier cannot be used with the periodmissing modifier or the periodoverride modifier.
periodmissing If this modifier is specified, the utility assumes that the input data file contains no data for the period columns. When this modifier is specified, all period column values are generated by the utility. This modifier cannot be used with the periodignore modifier or the periodoverride modifier.
periodoverride This modifier instructs the load utility to accept user-supplied data for GENERATED ALWAYS AS ROW BEGIN and GENERATED ALWAYS AS ROW END columns in a system-period temporal table. This behavior is contrary to the normal rules for these types of columns. The modifier can be useful when you want to maintain history data and load data that includes time stamps into a system-period temporal table. When this modifier is used, any rows with no data or NULL data in a ROW BEGIN or ROW END column are rejected.
rowchangetimestampignore This modifier informs the load utility that data for the row change timestamp column is present in the data file but should be ignored. This results in all ROW CHANGE TIMESTAMPs being generated by the utility. The behavior will be the same for both GENERATED ALWAYS and GENERATED BY DEFAULT columns. This means that for GENERATED ALWAYS columns, no rows will be rejected. This modifier cannot be used with either the rowchangetimestampmissing or the rowchangetimestampoverride modifier.
rowchangetimestampmissing If this modifier is specified, the utility assumes that the input data file contains no data for the row change timestamp column (not even NULLs), and will therefore generate a value for each row. The behavior will be the same for both GENERATED ALWAYS and GENERATED BY DEFAULT columns. This modifier cannot be used with either the rowchangetimestampignore or the rowchangetimestampoverride modifier.
rowchangetimestampoverride This modifier should be used only when a row change timestamp column defined as GENERATED ALWAYS is present in the table to be loaded. It instructs the utility to accept explicit, non-NULL data for such a column (contrary to the normal rules for these types of row change timestamp columns). This is useful when migrating data from another database system when the table must be defined as GENERATED ALWAYS, or when loading a table from data that was recovered using the DROPPED TABLE RECOVERY option on the ROLLFORWARD DATABASE command. When this modifier is used, any rows with no data or NULL data for the ROW CHANGE TIMESTAMP column will be rejected (SQL3116W). This modifier cannot be used with either the rowchangetimestampmissing or the rowchangetimestampignore modifier. The load utility will not attempt to maintain or verify the uniqueness of values in the table's row change timestamp column when this option is used.
seclabelchar Indicates that security labels in the input source file are in the string format for security label values rather than in the default encoded numeric format. LOAD converts each security label into the internal format as it is loaded. If a string is not in the proper format the row is not loaded and a warning (SQLSTATE 01H53, SQLCODE SQL3242W) is returned. If the string does not represent a valid security label that is part of the security policy protecting the table then the row is not loaded and a warning (SQLSTATE 01H53, SQLCODE SQL3243W) is returned.

This modifier cannot be specified if the seclabelname modifier is specified, otherwise the load fails and an error (SQLCODE SQL3525N) is returned.

If you have a table consisting of a single DB2SECURITYLABEL column, the data file might look like this:
"CONFIDENTIAL:ALPHA:G2"
"CONFIDENTIAL;SIGMA:G2"
"TOP SECRET:ALPHA:G2"
To load or import this data, the seclabelchar file type modifier must be used:
LOAD FROM input.del OF DEL MODIFIED BY SECLABELCHAR INSERT INTO t1
seclabelname Indicates that security labels in the input source file are indicated by their name rather than the default encoded numeric format. LOAD will convert the name to the appropriate security label if it exists. If no security label exists with the indicated name for the security policy protecting the table the row is not loaded and a warning (SQLSTATE 01H53, SQLCODE SQL3244W) is returned.

This modifier cannot be specified if the seclabelchar modifier is specified, otherwise the load fails and an error (SQLCODE SQL3525N) is returned.

If you have a table consisting of a single DB2SECURITYLABEL column, the data file might consist of security label names similar to:
"LABEL1"
"LABEL1"
"LABEL2"
To load or import this data, the seclabelname file type modifier must be used:
   LOAD FROM input.del OF DEL MODIFIED BY SECLABELNAME INSERT INTO t1
Note: If the file type is ASC, any spaces following the name of the security label will be interpreted as being part of the name. To avoid this use the striptblanks file type modifier to make sure the spaces are removed.
totalfreespace=x x is an integer greater than or equal to 0. The value is interpreted as the percentage of the total pages in the table that is to be appended to the end of the table as free space. For example, if x is 20, and the table has 100 data pages after the data has been loaded, 20 additional empty pages will be appended. The total number of data pages for the table will be 120. The data pages total does not factor in the number of index pages in the table. This option does not affect the index object. If two loads are done with this option specified, the second load will not reuse the extra space appended to the end by the first load.
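The arithmetic in the totalfreespace example can be sketched as follows; this is illustrative only, since DB2 performs the calculation internally:

```python
def pages_after_load(data_pages, totalfreespace_pct):
    """Total data pages after LOAD appends totalfreespace=x percent of
    the loaded pages as empty pages at the end of the table.
    Illustrative arithmetic only; index pages are not counted."""
    return data_pages + data_pages * totalfreespace_pct // 100

# 100 data pages loaded with totalfreespace=20 -> 20 empty pages appended.
print(pages_after_load(100, 20))  # 120
```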
transactionidignore This modifier informs the load utility that data for the TRANSACTION START ID column is present in the data file but should be ignored. When this modifier is specified, the value for the TRANSACTION START ID column is generated by the utility. This modifier cannot be used with the transactionidmissing modifier or the transactionidoverride modifier.
transactionidmissing If this modifier is specified, the utility assumes that the input data file contains no data for the TRANSACTION START ID columns. When this modifier is specified, the value for the TRANSACTION START ID column is generated by the utility. This modifier cannot be used with the transactionidignore modifier or the transactionidoverride modifier.
transactionidoverride This modifier instructs the load utility to accept user-supplied data for the GENERATED ALWAYS AS TRANSACTION START ID column in a system-period temporal table. This behavior is contrary to the normal rules for this type of column. When this modifier is used, any rows with no data or NULL data in a TRANSACTION START ID column are rejected.
usedefaults If a source column for a target table column has been specified, but it contains no data for one or more row instances, default values are loaded. Examples of missing data are:
  • For DEL files: two adjacent column delimiters (",,") or two adjacent column delimiters separated by an arbitrary number of spaces (", ,") are specified for a column value.
  • For DEL/ASC files: A row that does not have enough columns, or is not long enough for the original specification. For ASC files, NULL column values are not considered explicitly missing, and a default will not be substituted for NULL column values. NULL column values are represented by all space characters for numeric, date, time, and timestamp columns, or by using the NULL INDICATOR for a column of any type to indicate the column is NULL.
Without this option, if a source column contains no data for a row instance, one of the following occurs:
  • For DEL/ASC files: If the column is nullable, a NULL is loaded. If the column is not nullable, the utility rejects the row.
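The two "missing data" cases for DEL files (adjacent delimiters, or delimiters separated only by spaces) can be sketched as follows. This is a simplified illustration that ignores character delimiters (quoting), not the utility's actual parser:

```python
def missing_fields(del_line, delim=","):
    """Flag each column whose value is 'missing' in the usedefaults
    sense: empty, or made up only of spaces. Simplified sketch that
    ignores quoted fields."""
    return [field.strip(" ") == "" for field in del_line.split(delim)]

print(missing_fields("1,,3"))   # adjacent delimiters: second column missing
print(missing_fields("1, ,3"))  # delimiters separated by spaces: also missing
```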
Table 8. Valid file type modifiers for the load utility: ASCII file formats (ASC/DEL)
Modifier Description
codepage=x x is an ASCII character string. The value is interpreted as the code page of the data in the input data set. Converts character data (and numeric data specified in characters) from this code page to the database code page during the load operation.
The following rules apply:
  • For pure DBCS (graphic), mixed DBCS, and EUC, delimiters are restricted to the range of x00 to x3F, inclusive.
  • For DEL data specified in an EBCDIC code page, the delimiters might not coincide with the shift-in and shift-out DBCS characters.
  • nullindchar must specify symbols included in the standard ASCII set between code points x20 and x7F, inclusive. This refers to ASCII symbols and code points. EBCDIC data can use the corresponding symbols, even though the code points will be different.

This option is not supported in conjunction with the CURSOR filetype.

dateformat="x" x is the format of the date in the source file.1 Valid date elements are:
YYYY - Year (four digits ranging from 0000 - 9999)
M    - Month (one or two digits ranging from 1 - 12)
MM   - Month (two digits ranging from 01 - 12;
         mutually exclusive with M)
D    - Day (one or two digits ranging from 1 - 31)
DD   - Day (two digits ranging from 01 - 31;
         mutually exclusive with D)
DDD  - Day of the year (three digits ranging
         from 001 - 366; mutually exclusive
         with other day or month elements)
A default value of 1 is assigned for each element that is not specified. Some examples of date formats are:
"D-M-YYYY"
"MM.DD.YYYY"
"YYYYDDD"
dumpfile = x x is the fully qualified (according to the server database partition) name of an exception file to which rejected rows are written. A maximum of 32 KB of data is written per record. The following example shows how to specify a dump file:
db2 load from data of del
   modified by dumpfile = /u/user/filename
   insert into table_name

The file will be created and owned by the instance owner. To override the default file permissions, use the dumpfileaccessall file type modifier.

Note:
  1. In a partitioned database environment, the path should be local to the loading database partition, so that concurrently running load operations do not attempt to write to the same file.
  2. The contents of the file are written to disk in an asynchronous buffered mode. In the event of a failed or an interrupted load operation, the number of records committed to disk cannot be known with certainty, and consistency cannot be guaranteed after a LOAD RESTART. The file can only be assumed to be complete for a load operation that starts and completes in a single pass.
  3. If the specified file already exists, it will not be re-created, but it will be truncated.
dumpfileaccessall Grants read access to 'OTHERS' when a dump file is created.
This file type modifier is only valid when:
  1. it is used in conjunction with dumpfile file type modifier
  2. the user has SELECT privilege on the load target table
  3. it is issued on a DB2 server database partition that resides on a UNIX operating system

If the specified file already exists, its permissions will not be changed.

fastparse Use with caution. Reduces syntax checking on user-supplied column values, and enhances performance. Tables are guaranteed to be architecturally correct (the utility performs sufficient data checking to prevent a segmentation violation or trap), however, the coherence of the data is not validated. Only use this option if you are certain that your data is coherent and correct. For example, if the user-supplied data contains an invalid timestamp column value of :1>0-00-20-07.11.12.000000, this value is inserted into the table if fastparse is specified, and rejected if fastparse is not specified.
implieddecimal The location of an implied decimal point is determined by the column definition; it is no longer assumed to be at the end of the value. For example, the value 12345 is loaded into a DECIMAL(8,2) column as 123.45, not 12345.00.

This modifier cannot be used with the packeddecimal modifier.

timeformat="x" x is the format of the time in the source file.1 Valid time elements are:
H     - Hour (one or two digits ranging from 0 - 12
           for a 12 hour system, and 0 - 24
           for a 24 hour system)
HH    - Hour (two digits ranging from 00 - 12
           for a 12 hour system, and 00 - 24
           for a 24 hour system; mutually exclusive
             with H)
M     - Minute (one or two digits ranging
           from 0 - 59)
MM    - Minute (two digits ranging from 00 - 59;
           mutually exclusive with M)
S     - Second (one or two digits ranging
           from 0 - 59)
SS    - Second (two digits ranging from 00 - 59;
           mutually exclusive with S)
SSSSS - Second of the day after midnight (5 digits
           ranging from 00000 - 86400; mutually
           exclusive with other time elements)
TT    - Meridian indicator (AM or PM)
A default value of 0 is assigned for each element that is not specified. Some examples of time formats are:
"HH:MM:SS"
"HH.MM TT"
"SSSSS"
timestampformat="x" x is the format of the time stamp in the source file.1 Valid time stamp elements are:
YYYY   - Year (four digits ranging from 0000 - 9999)
M      - Month (one or two digits ranging from 1 - 12)
MM     - Month (two digits ranging from 01 - 12;
            mutually exclusive with M and MMM)
MMM    - Month (three-letter case-insensitive abbreviation for 
            the month name; mutually exclusive with M and MM)              
D      - Day (one or two digits ranging from 1 - 31)
DD     - Day (two digits ranging from 01 - 31; mutually exclusive with D)
DDD    - Day of the year (three digits ranging from 001 - 366; 
            mutually exclusive with other day or month elements)
H      - Hour (one or two digits ranging from 0 - 12
            for a 12 hour system, and 0 - 24 for a 24 hour system)
HH     - Hour (two digits ranging from 00 - 12 
            for a 12 hour system, and 00 - 24 for a 24 hour system; 
            mutually exclusive with H)
M      - Minute (one or two digits ranging from 0 - 59)
MM     - Minute (two digits ranging from 00 - 59;
            mutually exclusive with M, minute)
S      - Second (one or two digits ranging from 0 - 59)
SS     - Second (two digits ranging from 00 - 59;
            mutually exclusive with S)
SSSSS  - Second of the day after midnight (5 digits
            ranging from 00000 - 86400; mutually
            exclusive with other time elements)
U (1 to 12 times)
         - Fractional seconds (the number of occurrences of U
              represents the number of digits, with each digit
              ranging from 0 to 9)
TT     - Meridian indicator (AM or PM)
timestampformat="x" (Continued)
A default value of 1 is assigned for unspecified YYYY, M, MM, D, DD, or DDD elements. A default value of 'Jan' is assigned to an unspecified MMM element. A default value of 0 is assigned for all other unspecified elements. The following is an example of a time stamp format:
   "YYYY/MM/DD HH:MM:SS.UUUUUU"

The valid values for the MMM element include: 'jan', 'feb', 'mar', 'apr', 'may', 'jun', 'jul', 'aug', 'sep', 'oct', 'nov' and 'dec'. These values are case insensitive.

If the timestampformat modifier is not specified, the load utility formats the timestamp field using one of two possible formats:
YYYY-MM-DD-HH.MM.SS 
YYYY-MM-DD HH:MM:SS

The load utility chooses the format by looking at the separator between the DD and HH. If it is a dash '-', the load utility uses the regular dashes and dots format (YYYY-MM-DD-HH.MM.SS). If it is a blank space, then the load utility expects a colon ':' to separate the HH, MM and SS.

In either format, if you include the microseconds field (UUUUUU), the load utility expects the dot '.' as the separator. Either YYYY-MM-DD-HH.MM.SS.UUUUUU or YYYY-MM-DD HH:MM:SS.UUUUUU are acceptable.

The following example illustrates how to load data containing user defined date and time formats into a table called schedule:
db2 load from delfile2 of del
    modified by timestampformat="yyyy.mm.dd hh:mm tt"
    insert into schedule
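Outside the utility itself, the separator-based choice between the two default formats can be illustrated in Python. The `parse_default_timestamp` helper below is hypothetical, written only to mimic the rule described above; it is not part of any DB2 API.

```python
from datetime import datetime

def parse_default_timestamp(text: str) -> datetime:
    """Mimic the load utility's choice between the two default formats:
    a dash between DD and HH selects the dashes-and-dots form, a blank
    selects the colon-separated form. Microseconds, when present,
    follow a dot in either form."""
    # The character at index 10 separates DD from HH in both formats.
    if text[10] == "-":
        fmt = "%Y-%m-%d-%H.%M.%S"
    else:
        fmt = "%Y-%m-%d %H:%M:%S"
    if "." in text[19:]:          # optional microseconds field
        fmt += ".%f"
    return datetime.strptime(text, fmt)
```

For example, `parse_default_timestamp("1999-12-31-23.59.59")` and `parse_default_timestamp("1999-12-31 23:59:59.000123")` both succeed, each selecting the matching pattern.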
usegraphiccodepage If usegraphiccodepage is given, the assumption is made that data being loaded into graphic or double-byte character large object (DBCLOB) data field(s) is in the graphic code page. The rest of the data is assumed to be in the character code page. The graphic codepage is associated with the character code page. LOAD determines the character code page through either the codepage modifier, if it is specified, or through the code page of the database if the codepage modifier is not specified.

This modifier should be used in conjunction with the delimited data file generated by drop table recovery only if the table being recovered has graphic data.

Restrictions

The usegraphiccodepage modifier MUST NOT be specified with DEL files created by the EXPORT utility, as these files contain data encoded in only one code page. The modifier is also ignored for double-byte character large object (DBCLOB) data in files.

xmlchar Specifies that XML documents are encoded in the character code page.

This option is useful for processing XML documents that are encoded in the specified character code page but do not contain an encoding declaration.

For each document, if a declaration tag exists and contains an encoding attribute, the encoding must match the character code page, otherwise the row containing the document will be rejected. Note that the character codepage is the value specified by the codepage file type modifier, or the application codepage if it is not specified. By default, either the documents are encoded in Unicode, or they contain a declaration tag with an encoding attribute.

xmlgraphic Specifies that XML documents are encoded in the specified graphic code page.

This option is useful for processing XML documents that are encoded in a specific graphic code page but do not contain an encoding declaration.

For each document, if a declaration tag exists and contains an encoding attribute, the encoding must match the graphic code page, otherwise the row containing the document will be rejected. Note that the graphic code page is the graphic component of the value specified by the codepage file type modifier, or the graphic component of the application code page if it is not specified. By default, documents are either encoded in Unicode, or they contain a declaration tag with an encoding attribute.
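The declaration-tag check that applies to both xmlchar and xmlgraphic can be illustrated in Python. The `declared_encoding` helper below is hypothetical and deliberately simplified: it assumes the declaration itself is readable as ASCII and that the encoding attribute uses double quotation marks.

```python
import re

def declared_encoding(xml_bytes: bytes):
    """Return the encoding named in the XML declaration, or None if the
    document has no declaration or no encoding attribute. A row would
    be rejected if this value exists and does not match the expected
    character (xmlchar) or graphic (xmlgraphic) code page."""
    match = re.match(rb'<\?xml[^?]*encoding="([^"]+)"', xml_bytes)
    return match.group(1).decode("ascii") if match else None
```

A document such as `<?xml version="1.0" encoding="ISO-8859-1"?><a/>` reports `ISO-8859-1`, while a document with no declaration reports `None` and is accepted under the default assumption.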

Table 9. Valid file type modifiers for the load utility: ASC file formats (Non-delimited ASCII)
Modifier Description
binarynumerics Numeric (but not DECIMAL) data must be in binary form, not the character representation. This avoids costly conversions.

This option is supported only with positional ASC, using fixed length records specified by the reclen option.

The following rules apply:
  • No conversion between data types is performed, with the exception of BIGINT, INTEGER, and SMALLINT.
  • Data lengths must match their target column definitions.
  • FLOATs must be in IEEE Floating Point format.
  • Binary data in the load source file is assumed to be big-endian, regardless of the platform on which the load operation is running.

NULLs cannot be present in the data for columns affected by this modifier. Blanks (normally interpreted as NULL) are interpreted as a binary value when this modifier is used.
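As an illustration outside of DB2, Python's struct module can show the big-endian byte layout such fields have in the load source file. The two-field record layout here (a SMALLINT followed by an INTEGER) is a hypothetical example, not a required format.

```python
import struct

# A hypothetical positional ASC record with binarynumerics: a SMALLINT
# (2 bytes) followed by an INTEGER (4 bytes), both big-endian
# regardless of the platform producing or loading the file.
record = struct.pack(">hi", 42, 1_000_000)

# The consumer decodes the same bytes with the same big-endian formats.
small, integer = struct.unpack(">hi", record)
```

The `>` prefix forces big-endian byte order, so the bytes are identical whether the record is written on a little-endian Windows system or a big-endian one.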

nochecklengths If nochecklengths is specified, an attempt is made to load each row, even if the source data has a column definition that exceeds the size of the target table column. Such rows can be successfully loaded if code page conversion causes the source data to shrink; for example, 4-byte EUC data in the source could shrink to 2-byte DBCS data in the target, and require half the space. This option is particularly useful if it is known that the source data will fit in all cases despite mismatched column definitions.
nullindchar=x x is a single character. Changes the character denoting a NULL value to x. The default value of x is Y (see note 2).

This modifier is case sensitive for EBCDIC data files, except when the character is an English letter. For example, if the NULL indicator character is specified to be the letter N, then n is also recognized as a NULL indicator.

packeddecimal Loads packed-decimal data directly, since the binarynumerics modifier does not include the DECIMAL field type.

This option is supported only with positional ASC, using fixed length records specified by the reclen option.

Supported values for the sign nibble are:
+ = 0xC 0xA 0xE 0xF
- = 0xD 0xB

NULLs cannot be present in the data for columns affected by this modifier. Blanks (normally interpreted as NULL) are interpreted as a binary value when this modifier is used.

Regardless of the server platform, the byte order of binary data in the load source file is assumed to be big-endian; that is, when using this modifier on Windows operating systems, the byte order must not be reversed.

This modifier cannot be used with the implieddecimal modifier.
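A minimal Python sketch of the packed-decimal layout described above may help: two digits per byte, one per nibble, with the sign in the final low nibble. The `pack_decimal` helper is hypothetical, for illustration only, and uses 0xC and 0xD as the preferred sign nibbles.

```python
def pack_decimal(value: int, digits: int) -> bytes:
    """Encode an integer as packed decimal: one digit per nibble,
    with the sign in the final (low) nibble. 0xC = plus, 0xD = minus,
    from the supported sign-nibble values listed above."""
    sign = 0xC if value >= 0 else 0xD
    text = str(abs(value)).rjust(digits, "0")
    nibbles = [int(ch) for ch in text] + [sign]
    if len(nibbles) % 2:                 # pad to a whole number of bytes
        nibbles.insert(0, 0)
    return bytes((hi << 4) | lo for hi, lo in zip(nibbles[::2], nibbles[1::2]))
```

For example, -123 packs into the two bytes 0x12 0x3D: digits 1, 2, 3 followed by the minus nibble 0xD.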

reclen=x x is an integer with a maximum value of 32 767. x characters are read for each row, and a newline character is not used to indicate the end of the row.
striptblanks Truncates any trailing blank spaces when loading data into a variable-length field. If this option is not specified, blank spaces are kept.

This option cannot be specified together with striptnulls; they are mutually exclusive. This option replaces the obsolete t option, which is supported for backward compatibility only.

striptnulls Truncates any trailing NULLs (0x00 characters) when loading data into a variable-length field. If this option is not specified, NULLs are kept.

This option cannot be specified together with striptblanks; they are mutually exclusive. This option replaces the obsolete padwithzero option, which is supported for backward compatibility only.

zoneddecimal Loads zoned decimal data, since the binarynumerics modifier does not include the DECIMAL field type. This option is supported only with positional ASC, using fixed length records specified by the reclen option.
Half-byte sign values can be one of the following values:
+ = 0xC 0xA 0xE 0xF 0x3
- = 0xD 0xB 0x7

Supported values for digits are 0x0 to 0x9.

Supported values for zones are 0x3 and 0xF.
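The zoned-decimal layout can likewise be sketched in Python: one digit per byte, with the zone nibble (0x3 or 0xF) in the high half and the sign replacing the zone of the final byte. The `zone_decimal` helper is hypothetical and uses 0xC and 0xD as the sign nibbles.

```python
def zone_decimal(value: int, digits: int, zone: int = 0x3) -> bytes:
    """Encode an integer as zoned decimal: one digit per byte with the
    zone (0x3 or 0xF) in the high nibble; the final byte carries the
    sign (0xC plus / 0xD minus here) in place of its zone."""
    sign = 0xC if value >= 0 else 0xD
    text = str(abs(value)).rjust(digits, "0")
    out = bytearray((zone << 4) | int(ch) for ch in text)
    out[-1] = (sign << 4) | (out[-1] & 0x0F)
    return bytes(out)
```

With the 0x3 zone, the leading digits are ordinary ASCII digits; only the last byte differs, so +123 becomes 0x31 0x32 0xC3.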

Table 10. Valid file type modifiers for the load utility: DEL file formats (Delimited ASCII)
Modifier Description
chardelx x is a single character string delimiter. The default value is a double quotation mark ("). The specified character is used in place of double quotation marks to enclose a character string (see notes 2 and 3). If you want to explicitly specify the double quotation mark (") as the character string delimiter, you should specify it as follows:
modified by chardel""
The single quotation mark (') can also be specified as a character string delimiter as follows:
modified by chardel''
coldelx x is a single character column delimiter. The default value is a comma (,). The specified character is used in place of a comma to signal the end of a column (see notes 2 and 3).
decplusblank Plus sign character. Causes positive decimal values to be prefixed with a blank space instead of a plus sign (+). The default action is to prefix positive decimal values with a plus sign.
decptx x is a single character substitute for the period as a decimal point character. The default value is a period (.). The specified character is used in place of a period as a decimal point character (see notes 2 and 3).
delprioritychar The current default priority for delimiters is: record delimiter, character delimiter, column delimiter. This modifier protects existing applications that depend on the older priority by reverting the delimiter priorities to: character delimiter, record delimiter, column delimiter. Syntax:
db2 load ... modified by delprioritychar ...
For example, given the following DEL data file:
"Smith, Joshua",4000,34.98<row delimiter>
"Vincent,<row delimiter>, is a manager", ...
... 4005,44.37<row delimiter>

With the delprioritychar modifier specified, there will be only two rows in this data file. The second <row delimiter> will be interpreted as part of the first data column of the second row, while the first and the third <row delimiter> are interpreted as actual record delimiters. If this modifier is not specified, there will be three rows in this data file, each delimited by a <row delimiter>.
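The two priority orders can be sketched in Python. The `split_rows` helper below is a hypothetical, heavily simplified parser: it ignores doubled character delimiters and column parsing, and only decides where rows end.

```python
def split_rows(data: str, delprioritychar: bool,
               chardel: str = '"', rowdel: str = "\n") -> list:
    """Split DEL data into rows. With delprioritychar, a row delimiter
    that falls inside an open character-delimited string is treated as
    data; otherwise the row delimiter always wins."""
    if not delprioritychar:
        return data.split(rowdel)        # record delimiter has priority
    rows, current, in_string = [], [], False
    for ch in data:
        if ch == chardel:
            in_string = not in_string    # character delimiter has priority
            current.append(ch)
        elif ch == rowdel and not in_string:
            rows.append("".join(current))
            current = []
        else:
            current.append(ch)
    if current:
        rows.append("".join(current))
    return rows
```

Applied to the sample data above (with a newline as the row delimiter), the character-priority parse yields two rows and the default record-priority parse yields three, matching the behavior described.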

keepblanks Preserves the leading and trailing blanks in each field of type CHAR, VARCHAR, LONG VARCHAR, or CLOB. Without this option, all leading and trailing blanks that are not inside character delimiters are removed, and a NULL is inserted into the table for all blank fields.
The following example illustrates how to load data into a table called TABLE1, while preserving all leading and trailing spaces in the data file:
db2 load from delfile3 of del
   modified by keepblanks
   insert into table1
nochardel The load utility will assume all bytes found between the column delimiters to be part of the column's data. Character delimiters will be parsed as part of column data. This option should not be specified if the data was exported using a DB2 database system (unless nochardel was specified at export time). It is provided to support vendor data files that do not have character delimiters. Improper usage might result in data loss or corruption.

This option cannot be specified with chardelx, delprioritychar or nodoubledel. These are mutually exclusive options.

nodoubledel Suppresses recognition of double character delimiters.
Table 11. Valid file type modifiers for the load utility: IXF file format
Modifier Description
forcein Directs the utility to accept data despite code page mismatches, and to suppress translation between code pages.

Fixed length target fields are checked to verify that they are large enough for the data. If nochecklengths is specified, no checking is done, and an attempt is made to load each row.

nochecklengths If nochecklengths is specified, an attempt is made to load each row, even if the source data has a column definition that exceeds the size of the target table column. Such rows can be successfully loaded if code page conversion causes the source data to shrink; for example, 4-byte EUC data in the source could shrink to 2-byte DBCS data in the target, and require half the space. This option is particularly useful if it is known that the source data will fit in all cases despite mismatched column definitions.
Note:
  1. Double quotation marks around the date format string are mandatory. Field separators cannot contain any of the following characters: a-z, A-Z, and 0-9. The field separator should not be the same as the character delimiter or field delimiter in the DEL file format. A field separator is optional if the start and end positions of an element are unambiguous. Ambiguity can exist if (depending on the modifier) elements such as D, H, M, or S are used, because of the variable length of the entries.
    For time stamp formats, care must be taken to avoid ambiguity between the month and the minute descriptors, since they both use the letter M. A month field must be adjacent to other date fields. A minute field must be adjacent to other time fields. Following are some ambiguous time stamp formats:
    "M" (could be a month, or a minute)
    "M:M" (Which is which?)
    "M:YYYY:M" (Both are interpreted as month.)
    "S:M:YYYY" (adjacent to both a time value and a date value)
    In ambiguous cases, the utility will report an error message, and the operation will fail.
    Following are some unambiguous time stamp formats:
    "M:YYYY" (Month)
    "S:M" (Minute)
    "M:YYYY:S:M" (Month....Minute)
    "M:H:YYYY:M:D" (Minute....Month)

    Some characters, such as double quotation marks and backslashes, must be preceded by an escape character (for example, \).

  2. Character values provided for the chardel, coldel, or decpt file type modifiers must be specified in the code page of the source data.
    The character code point (instead of the character symbol), can be specified using the syntax xJJ or 0xJJ, where JJ is the hexadecimal representation of the code point. For example, to specify the # character as a column delimiter, use one of the following statements:
    ... modified by coldel# ...
    ... modified by coldel0x23 ...
    ... modified by coldelX23 ...
  3. "Delimiter considerations for moving data" lists restrictions that apply to the characters that can be used as delimiter overrides.
  4. The load utility does not issue a warning if an attempt is made to use unsupported file types with the MODIFIED BY option. If this is attempted, the load operation fails, and an error code is returned.
  5. When multiple modifiers suffixed with ignore, include, missing, and override are specified, they are applied in the order in which they are listed. In the following statement, data for implicitly hidden columns that are not identity columns is included in the input data, while data for all identity columns, regardless of their implicitly hidden status, is not:
    db2 load from delfile1 of del modified by 
       implicitlyhiddeninclude identitymissing insert into table1 
    However, changing the order of the file type modifiers in the following statement means that data for all implicitly hidden columns (including hidden identity columns) is included in the input data, while data for identity columns that are not implicitly hidden is not:
    db2 load from delfile1 of del modified by 
       identitymissing implicitlyhiddeninclude insert into table1 
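The ordering rule in note 5 can be sketched in Python. The `column_in_input` helper is hypothetical; it models only the two modifiers from the examples, applying them in the order listed so that the later one decides for columns matched by both.

```python
def column_in_input(column: dict, modifiers: list) -> bool:
    """Decide whether input data is expected for a column, applying the
    suffixed modifiers in the order listed (later modifiers win).
    `column` carries 'hidden' and 'identity' flags. A simplified sketch
    of the ordering rule, not the utility's full logic."""
    included = not column["hidden"]      # hidden columns default to no input data
    for mod in modifiers:
        if mod == "implicitlyhiddeninclude" and column["hidden"]:
            included = True
        elif mod == "identitymissing" and column["identity"]:
            included = False
    return included
```

For a hidden identity column, `["implicitlyhiddeninclude", "identitymissing"]` excludes its data while `["identitymissing", "implicitlyhiddeninclude"]` includes it, matching the two statements above.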
Table 12. LOAD behavior when using codepage and usegraphiccodepage
codepage=N | usegraphiccodepage | LOAD behavior

Absent | Absent | All data in the file is assumed to be in the database code page, not the application code page, even if the CLIENT option is specified.

Present | Absent | All data in the file is assumed to be in code page N.

  Warning: Graphic data will be corrupted when loaded into the database if N is a single-byte code page.

Absent | Present | Character data in the file is assumed to be in the database code page, even if the CLIENT option is specified. Graphic data is assumed to be in the code page of the database graphic data, even if the CLIENT option is specified.

  If the database code page is single-byte, then all data is assumed to be in the database code page.

  Warning: Graphic data will be corrupted when loaded into a single-byte database.

Present | Present | Character data is assumed to be in code page N. Graphic data is assumed to be in the graphic code page of N.

  If N is a single-byte or double-byte code page, then all data is assumed to be in code page N.

  Warning: Graphic data will be corrupted when loaded into the database if N is a single-byte code page.