What's new in SQL Syntax for Informix, Version 12.10

This publication includes information about new features and changes in existing functionality.

For a complete list of what's new in this release, go to What's new in Informix®.

Table 1. What's new in IBM Informix Guide to SQL: Syntax for Version 12.10.xC8
Overview Reference
Rename system-generated indexes

You can rename a system-generated index that implements a primary-key or foreign-key constraint, for example, to give the index a more meaningful name. After you rename a system-generated index, the index is not automatically dropped if you drop the corresponding constraint.
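
The following statement is a minimal sketch of the new syntax. The index and table names are hypothetical; the system-generated index names in your database differ.

  RENAME INDEX ix_orders_fk TO orders_customer_fk_idx;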

RENAME INDEX statement
Consistent sharded insert, update, and delete operations

When you run sharded operations that insert, update, or delete data, the transactions are now applied with the two-phase commit protocol instead of being eventually consistent. Data is moved to the appropriate shard server before the transaction is committed.

For sharding with Enterprise Replication commands, you must set the new USE_SHARDING session environment option to enable consistent sharded insert, update, and delete operations.
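
For example, you might enable consistent sharded operations for the current session with a statement like the following sketch; see the USE_SHARDING topic for the exact value format.

  SET ENVIRONMENT USE_SHARDING ON;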

USE_SHARDING session environment option
Suspend validation of check constraints

You can now temporarily suspend the validation of check constraints with the NOVALIDATE keyword. When you create or enable a check constraint, you can speed up the statement by including the NOVALIDATE keyword to skip the checking of existing rows for violations. The check constraint is enabled when the statement completes.
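
A minimal sketch; the exact placement of the NOVALIDATE keyword in the constraint definition is shown in the CHECK Clause topic.

  ALTER TABLE items
    ADD CONSTRAINT (CHECK (quantity > 0) NOVALIDATE);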

CHECK Clause

NOVALIDATE session environment option

Table 2. What's new in IBM Informix Guide to SQL: Syntax for Version 12.10.xC7
Overview Reference
COALESCE function

You can now evaluate a series of expressions to return the first non-null value by running the COALESCE function.
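
For example, the following query returns the ship date when it is set and otherwise falls back to the order date (column names follow the stores_demo orders table):

  SELECT order_num,
         COALESCE(ship_date, order_date) AS effective_date
    FROM orders;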

COALESCE Function
Update BSON arrays in an SQL statement

When you run the BSON_UPDATE function in an SQL statement, you can now include the MongoDB array update operators $addToSet, $pop, $pullAll, and $push, plus the array update operator modifiers $each, $position, $slice, and $sort.
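
A sketch of one such update, assuming a hypothetical table with a BSON column named data:

  UPDATE people
     SET data = BSON_UPDATE(data, '{ "$push": { "hobbies": "cycling" } }')
   WHERE BSON_VALUE_LVARCHAR(data, 'name') = 'Ann';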

BSON_UPDATE function
Table 3. What's new in IBM Informix Guide to SQL: Syntax for Version 12.10.xC6
Overview Reference
Avoid caching SQL statements with unpredictable query plans

Some SQL statements produce significantly different query plans depending on the values of the placeholders that are passed to the database server when the statements are run. Using a cached query plan for such a statement might result in poor performance. You can now avoid caching an SQL statement whose query plan is unpredictable by including the AVOID_STMT_CACHE optimizer directive.
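
Optimizer directives are embedded as comments after the SELECT keyword. A sketch with a hypothetical filter:

  SELECT {+ AVOID_STMT_CACHE }
         customer_num, lname
    FROM customer
   WHERE lname LIKE 'A%';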

Statement cache directive
Table 4. What's new in IBM Informix Guide to SQL: Syntax for Version 12.10.xC5
Overview Reference
Manipulate JSON and BSON data with SQL statements

You can use SQL statements to manipulate BSON data. You can create BSON columns with the SQL CREATE TABLE statement. You can manipulate BSON data in a collection that was created by a MongoDB API command. You can use the CREATE INDEX statement to create an index on a field in a BSON column. You can insert data with SQL statements or Informix utilities. You can view BSON data by casting the data to JSON format or running the new BSON value functions to convert BSON field values into standard SQL data types, such as INTEGER and LVARCHAR. You can use the new BSON_GET and BSON_UPDATE functions to operate on field-value pairs.
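
A small sketch of these operations, using a hypothetical table; the cast chain from a quoted JSON document to BSON is one plausible way to insert the data.

  CREATE TABLE products (id SERIAL, info BSON);

  INSERT INTO products (info)
    VALUES ('{"name":"widget","price":9.99}'::JSON::BSON);

  SELECT info::JSON AS doc,
         BSON_VALUE_LVARCHAR(info, 'name') AS product_name
    FROM products;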

BSON and JSON built-in opaque data types

BSON processing functions

Correlated aggregate expressions

In a subquery, an aggregate expression with a column operand that was declared in a parent query block is called a correlated aggregate. The column operand is called a correlated column reference. When a subquery contains a correlated aggregate with a correlated column reference, the database server now evaluates that aggregate in the parent query block where the correlated column reference was declared. If the aggregate contains multiple correlated column references, the aggregate is processed in the nearest parent query block in which a correlated column reference originated.
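
A plausible illustration, using hypothetical sales and targets tables; SUM(s.amount) in the subquery is a correlated aggregate because its column operand is declared in the parent query block, so it is evaluated per group in that block.

  SELECT s.region
    FROM sales s
   GROUP BY s.region
  HAVING EXISTS (SELECT 1
                   FROM targets t
                  WHERE t.region = s.region
                    AND SUM(s.amount) >= t.goal);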

Selecting correlated aggregates in subqueries
Control repreparation

You can improve the speed of queries by controlling when queries are automatically reprepared. The AUTO_REPREPARE configuration parameter and the IFX_AUTO_REPREPARE session environment option support these additional values:

  • 3 = Enables automatic repreparation in optimistic mode. If a statement ran correctly less than one second ago, do not reprepare the statement.
  • 5 = Enables automatic repreparation after UPDATE STATISTICS is run. If a statement includes a table on which UPDATE STATISTICS was run, reprepare the statement.
  • 7 = Enables automatic repreparation in optimistic mode and after UPDATE STATISTICS is run.
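
For example, you might enable optimistic repreparation for the current session with a statement like this (value quoting as described in the referenced topic):

  SET ENVIRONMENT IFX_AUTO_REPREPARE '3';
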
IFX_AUTO_REPREPARE session environment option
Table 5. What's new in IBM Informix Guide to SQL: Syntax for Version 12.10.xC4
Overview Reference
Quickly export relational tables to BSON or JSON documents

You can export relational tables to BSON or JSON documents faster by running the new genBSON() SQL function than by retrieving the data with MongoDB commands through the wire listener. For example, you can provide relational data to an application that displays data in JSON or BSON format. By default, the genBSON() function exports relational tables to a BSON document. You can cast the genBSON() function to JSON to create a JSON document.
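
A sketch of one plausible call; the ROW argument and the two trailing flag arguments are assumptions, so check the genBSON function topic for the exact parameters.

  SELECT genBSON(ROW(customer_num, fname, lname), 0, 1)::JSON
    FROM customer;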

genBSON function
SQL compatibility: LIMIT clause allowed after the Projection clause

You can include the new LIMIT clause after the optional ORDER BY clause in a SELECT statement. Use the LIMIT clause to specify the maximum number of rows the query can return. The LIMIT clause has the same effect as the LIMIT option, except that the LIMIT option must be included in the Projection clause of the SELECT statement.
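
For example, the following query sorts the result and returns at most 10 rows:

  SELECT customer_num, lname
    FROM customer
   ORDER BY lname
   LIMIT 10;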

LIMIT Clause

Restricting return values with the SKIP, LIMIT, and FIRST options

Limit the number of locks for a session

You can prevent users from acquiring too many locks by limiting the number of locks per session for users who do not have administrative privileges. Set the SESSION_LIMIT_LOCKS configuration parameter, or set the IFX_SESSION_LIMIT_LOCKS option of the SET ENVIRONMENT statement.
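
For example, a session-level limit might be set like this (value quoting as described in the referenced topic):

  SET ENVIRONMENT IFX_SESSION_LIMIT_LOCKS '5000';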

IFX_SESSION_LIMIT_LOCKS session environment option
Table 6. What's new in IBM Informix Guide to SQL: Syntax for Version 12.10.xC3
Overview Reference
Find the quarter of the calendar year for dates

You can find the quarter of the calendar year for a date by running the QUARTER function. The QUARTER function accepts a DATE or DATETIME argument, and returns an integer in the range 1 - 4, indicating the quarter of a calendar year. For example, on any date in July, August, or September, the expression QUARTER (CURRENT) returns 3 because those dates are in the third quarter of a calendar year. You can include the QUARTER function in queries and in statements to define a distributed storage strategy that is based on a DATE or DATETIME column, such as the PARTITION BY EXPRESSION clause of the CREATE TABLE or ALTER TABLE statements.
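
For example, the following query returns the quarter in which each order was placed:

  SELECT order_num, QUARTER(order_date) AS order_quarter
    FROM orders;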

QUARTER Function
Retrying connections

Previously, you could set the INFORMIXCONTIME and INFORMIXCONRETRY environment variables in the client environment before you started the database server. These values specified the number of seconds that the client session spends trying to connect to the database server and the number of connection attempts. As of this fix pack, you can also control the duration and frequency of connection attempts in other ways.

You can use the SET ENVIRONMENT SQL statement to set the INFORMIXCONTIME and INFORMIXCONRETRY environment options for the current session. That statement overrides the values that are set by the other methods.
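
For example, the current session might override the connection settings like this (value quoting as described in the referenced topics):

  SET ENVIRONMENT INFORMIXCONTIME '60';
  SET ENVIRONMENT INFORMIXCONRETRY '3';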

SET ENVIRONMENT statement

INFORMIXCONTIME session environment option

INFORMIXCONRETRY session environment option

Automatic location and fragmentation

In previous releases, the default location for new databases was the root dbspace, the default location for new tables and indexes was the dbspace of the corresponding database, and new tables were not fragmented by default. As of 12.10.xC3, you can enable the database server to automatically choose the location for new databases, tables, and indexes. The location selection is based on an algorithm that gives higher priority to non-critical dbspaces and dbspaces with an optimal page size. New tables are automatically fragmented in round-robin order in the available dbspaces.

Set the AUTOLOCATE configuration parameter or session environment option to the number of initial round-robin fragments to create for new tables. By default, all dbspaces are available. More fragments are added as needed when the table grows. You can manage the list of dbspaces for table fragments by running the admin() or task() SQL administration API command with one of the autolocate database arguments.
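
For example, a session might request four initial round-robin fragments for new tables; the argument forms for the admin() and task() functions are listed in the referenced topics.

  SET ENVIRONMENT AUTOLOCATE '4';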

SET ENVIRONMENT statement

AUTOLOCATE session environment option

Temporarily prevent constraint validation

You can significantly increase the speed of loading or migrating large tables by temporarily preventing the database server from validating foreign-key referential constraints. You can disable the validation of constraints when you create constraints or change the mode of constraints to ENABLED or FILTERING in either of the following ways:

  • Include the NOVALIDATE keyword in an ALTER TABLE ADD CONSTRAINT statement or in a SET CONSTRAINTS ENABLED or SET CONSTRAINTS FILTERING statement.
  • If you plan to run multiple ALTER TABLE ADD CONSTRAINT or SET CONSTRAINTS statements, run the SET ENVIRONMENT NOVALIDATE ON statement to disable the validation of foreign-key constraints during the current session.

The NOVALIDATE keyword prevents the database server from checking every row for referential integrity during ALTER TABLE ADD CONSTRAINT and SET CONSTRAINTS operations on foreign-key constraints. When those statements finish running, the database server automatically resumes referential-integrity enforcement of those constraints in subsequent DML operations.

Use this feature only on tables whose enabled foreign-key constraints are free of violations, or when the referential constraints can be validated after the tables are loaded or migrated to the target database.
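
A sketch of both approaches, with hypothetical table and column names; the exact placement of the NOVALIDATE keyword in the constraint definition is shown in the referenced topics.

  ALTER TABLE child_tab
    ADD CONSTRAINT (FOREIGN KEY (parent_id)
        REFERENCES parent_tab (id) NOVALIDATE);

  SET ENVIRONMENT NOVALIDATE ON;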

Creating foreign-key constraints in NOVALIDATE modes

SET CONSTRAINTS statement

Modes for constraints and unique indexes

NOVALIDATE session environment option

Faster creation of foreign-key constraints

When you run the ALTER TABLE ADD CONSTRAINT statement, some foreign-key constraints can be created faster if the table has a unique index or a primary-key constraint that is already defined on the columns in the foreign-key constraint.

Foreign-key constraints are not created faster, however, if the constraint key or index key includes columns of user-defined or opaque data types, including BOOLEAN and LVARCHAR, or if other restrictions are true for the foreign-key constraint or for the referenced table.

Creating foreign-key constraints when an index exists on the referenced table

Enabling foreign-key constraints when an index exists on the referenced table

Table 7. What's new in IBM Informix Guide to SQL: Syntax for Version 12.10.xC2
Overview Reference
Joins with lateral references

In queries that join result tables in the FROM clause, you can now use the LATERAL keyword to reference previous table and column aliases in the FROM clause. The LATERAL keyword must immediately precede any query in the FROM clause that defines a derived table as its result set, if that query references any table or column that appears earlier in the left-to-right order of FROM clause syntax elements. For SELECT statements that join derived tables, lateral table and column references comply with the ISO/ANSI standard for SQL syntax, and can improve performance. Lateral references are also valid in DELETE, UPDATE, and CREATE VIEW statements that include derived tables.
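
A sketch of a lateral derived table; the derived table refers to the customer alias that appears earlier in the FROM clause.

  SELECT c.customer_num, t.total_ship
    FROM customer c,
         LATERAL (SELECT SUM(o.ship_charge) AS total_ship
                    FROM orders o
                   WHERE o.customer_num = c.customer_num) AS t;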

Lateral derived tables
Defining separators for fractional seconds in date-time values

Now you can control which separator to use in the character-string representation of fractional seconds. To define a separator between seconds and fractional seconds, you must include a literal character between the %S and %F directives when you set the GL_DATETIME or DBTIME environment variable, or when you call the TO_CHAR function. By default, a separator is not used between seconds and fractional seconds. Previously, the ASCII 46 character, a period ( . ), was inserted before the fractional seconds, regardless of whether the formatting string included an explicit separator for the two fields.
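
For example, the following TO_CHAR call uses a period as an explicit separator between the %S and %F3 directives; the query selects from systables only to have a table to run against.

  SELECT TO_CHAR(CURRENT YEAR TO FRACTION(3), '%Y-%m-%d %H:%M:%S.%F3')
    FROM systables
   WHERE tabid = 1;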

TO_CHAR Function
Table 8. What's new in IBM Informix Guide to SQL: Syntax for Version 12.10.xC1
Overview Reference
Autonomic storage management for rolling window tables

For tables that use a RANGE INTERVAL distributed storage strategy, the database server creates new fragments automatically when an inserted record has a fragment key value outside the range of any existing fragment. You can use new DDL syntax to define a purge policy for archiving or deleting interval fragments after the table exceeds a user-specified limit on its allocated storage space size, or on the number of its interval fragments. A new built-in Scheduler task runs periodically to enforce the purge policies on qualifying rolling window tables.
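
A sketch of a rolling window table with a purge policy that keeps at most 12 monthly interval fragments and discards the oldest fragments; the table, the dbspace names, and the clause order here are assumptions, so follow the Interval fragment clause topic for the exact syntax.

  CREATE TABLE sensor_log (
      reading_time  DATETIME YEAR TO SECOND,
      sensor_id     INTEGER,
      reading       FLOAT
    )
    FRAGMENT BY RANGE (reading_time)
      INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
      ROLLING (12 FRAGMENTS) DISCARD
      STORE IN (dbs1, dbs2)
      PARTITION p0
        VALUES < DATETIME (2014-01-01 00:00:00) YEAR TO SECOND IN dbs1;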

Interval fragment clause
Enhanced CREATE TABLE and ALTER FRAGMENT interval fragment syntax

For tables that are fragmented by RANGE INTERVAL, you can create an SPL routine that returns a dbspace name. If you specify this UDF as the function expression after the STORE IN keywords of the interval fragment clause, the database server calls the UDF whenever it creates a new interval fragment for the table. As an alternative to a list of dbspaces, this syntax is valid for interval fragments of rolling window tables and for interval fragments of tables with no purge policy.
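
A minimal sketch of such a routine and its use in the STORE IN clause; the function name and dbspace name are hypothetical.

  CREATE FUNCTION next_dbspace() RETURNING VARCHAR(128);
      RETURN 'dbs1';
  END FUNCTION;

  -- ... FRAGMENT BY RANGE (reading_time)
  --       INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
  --       STORE IN (next_dbspace()) ...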

Interval fragment clause

MODIFY Clause

Improve space utilization by compressing, repacking, and shrinking B-tree indexes

You can use SQL administration API commands or CREATE INDEX statements to save disk space by compressing B-tree indexes. You can also use SQL administration API commands to consolidate free space in a B-tree index, return this free space to the dbspace, and estimate the amount of space that is saved by compressing the indexes.
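
For example, you can create a compressed B-tree index directly (the index and table names are hypothetical); the corresponding SQL administration API commands are described in the referenced topics.

  CREATE INDEX ix_orders_cust ON orders (customer_num) COMPRESSED;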

CREATE INDEX statement

COMPRESSED option for indexes

Save disk space by enabling automatic data compression

You can use the COMPRESSED keyword with the CREATE TABLE statement to enable the automatic compression of large amounts of in-row data when the data is loaded into a table or table fragment. Then, when 2,000 or more rows of data are loaded, the database server automatically creates a compression dictionary and compresses the new data rows that are inserted into the table.

Also, when you run SQL administration API create dictionary and compress commands on existing tables and fragments, you enable the automatic compression of subsequent data loads that contain 2,000 or more rows of data. If you run an uncompress command, you disable automatic compression.

In addition to saving space, automatic compression saves time because you do not have to compress the data after you load it.
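
A sketch of a table that is created with automatic compression enabled; the exact position of the COMPRESSED keyword among the storage options is covered in the referenced topics.

  CREATE TABLE archive_log (
      log_time  DATETIME YEAR TO SECOND,
      message   LVARCHAR(2048)
    ) COMPRESSED;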

Storage options

COMPRESSED option for tables

OLAP windowed aggregate functions

This release introduces support for online analytical processing (OLAP) functions to provide ordinal ranking, row numbering, and aggregate information for subsets of the qualifying rows that a query returns. These subsets are based on partitioning the data by one or more column values. You can use OLAP specifications to define moving windows within a partition for examining dimensions of the data, and identifying patterns, trends, and exceptions within data sets. You can define window frames as ranges of column values, or by position relative to the current row in the window partition.

If you invoke the built-in aggregate function COUNT, SUM, AVG, MIN, MAX, STDEV, VARIANCE, or RANGE from an OLAP window, the return value is based on only the records within that window, rather than on the entire result set of the query. In nested queries, the set of rows to which each OLAP function or aggregate is applied is the result set of the query block that includes the OLAP function.
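
For example, the following query computes a running total within each region of a hypothetical sales table:

  SELECT region, sale_month, amount,
         SUM(amount) OVER (PARTITION BY region
                           ORDER BY sale_month
                           ROWS UNBOUNDED PRECEDING) AS running_total
    FROM sales;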

OLAP window expressions
Enhanced control over how ORDER BY sorts null values in query results

When you sort query results with the ORDER BY clause, you can use the new NULLS FIRST and NULLS LAST keyword options to specify where rows that have a null sort key value appear in the result set.

Earlier Informix releases treated null values differently:
  • Put null values last if the ORDER BY clause specified DESC for a descending order
  • Put null values first if DESC was omitted, or if the ORDER BY clause explicitly specified ASC for an ascending order
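
For example, the following query sorts in descending order by ship date but places rows with a null ship_date last:

  SELECT order_num, ship_date
    FROM orders
   ORDER BY ship_date DESC NULLS LAST;
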
Ascending and Descending Orders
SQL DISTINCT expressions as arguments to COUNT, SUM, AVG, and STDEV aggregates

In earlier releases, queries could call the built-in COUNT, SUM, AVG, and STDEV functions only on a column or column expression. This release extends the domain of COUNT, SUM, AVG, and STDEV arguments to SQL DISTINCT expressions, including CASE expressions.
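
For example, a DISTINCT CASE expression can be passed to an aggregate; here the CASE expression yields NULL when the charge is not positive, and the aggregate ignores nulls.

  SELECT COUNT(DISTINCT CASE WHEN ship_charge > 0
                             THEN ship_charge END) AS distinct_charges
    FROM orders;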

AVG Function

CASE Expressions

SUM Function

OLAP window aggregate functions

Multiple DISTINCT aggregate functions in a query

You can now include multiple aggregate functions that return DISTINCT values in a query. For example, you can include multiple COUNT(DISTINCT) specifications in a single query instead of writing a separate query for each COUNT aggregate.
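
For example, one query can now contain several DISTINCT aggregates:

  SELECT COUNT(DISTINCT customer_num)  AS n_customers,
         COUNT(DISTINCT ship_instruct) AS n_instructions
    FROM orders;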

Controlling duplicate return values

Including or excluding duplicates in the result set

Faster ANSI join queries

ANSI outer-join queries that have equality joins can run faster because the Informix optimizer now chooses between a hash join and a nested-loop join based on cost. In earlier releases, Informix used only nested-loop joins in ANSI outer joins.

 
Grid queries for consolidating data from multiple grid servers

You can write a grid query to select data from multiple servers in a grid. Use the new GRID clause in the SELECT statement to specify the servers on which to run the query. After the query is run, the results that are returned from each of the servers are consolidated.
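
A heavily hedged sketch; the placement of the GRID clause and its keywords here are assumptions, so follow the GRID clause topic for the exact syntax (the table and grid names are hypothetical).

  SELECT region, SUM(amount)
    FROM sales GRID ALL 'sales_grid'
   GROUP BY region;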

GRID clause
Enhanced SQL statements to store query results in permanent tables

You can store query results in permanent tables when you create tables in your database and when you retrieve data from tables.

Use the CREATE TABLE ... AS SELECT ... FROM statement to define a permanent database table that multiple sessions can access. In previous releases, you had to use the CREATE TABLE statement to create empty tables and then use INSERT INTO ... SELECT ... FROM statements to store data in the tables.

Use the SELECT...INTO statement to store query results in a permanent table. In previous releases, SELECT statements stored query results only in temporary or external table objects.
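
A sketch of both forms, with hypothetical target table names; the position of the INTO STANDARD clause shown here follows the familiar INTO TEMP pattern, as described in the referenced topics.

  CREATE TABLE ca_customers
    AS SELECT customer_num, lname, city
         FROM customer
        WHERE state = 'CA';

  SELECT customer_num, lname, city
    FROM customer
   WHERE state = 'CA'
    INTO STANDARD ca_customers2;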

AS SELECT clause

INTO STANDARD and INTO RAW Clauses

The IBM® Informix SPL language now includes the CASE statement

SPL routines now can use the CASE statement as a faster alternative to IF statements to define a set of conditional logical branches, which are based on the value of an expression. This syntax can simplify migration to Informix of SPL applications that were written for Informix Extended Parallel Server or for other database servers.
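
A minimal SPL sketch with hypothetical routine and variable names; see the CASE topic for the exact statement syntax.

  CREATE FUNCTION classify_qty (qty INT) RETURNING VARCHAR(20);
      DEFINE label VARCHAR(20);

      CASE
          WHEN qty <= 0 THEN
              LET label = 'none';
          WHEN qty < 100 THEN
              LET label = 'small';
          ELSE
              LET label = 'large';
      END CASE;

      RETURN label;
  END FUNCTION;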

CASE
Enhanced support for OUT and INOUT parameters in SPL routines

SPL user-defined routines and C user-defined routines with OUT or INOUT arguments can be invoked from other SPL routines. The OUT and INOUT return values can be processed as statement-local variables or as local SPL variables of SQL data types. The SPL routines that are invoked from SPL routines support all data types except BYTE, TEXT, BIGSERIAL, SERIAL, and SERIAL8. The C routines that are invoked from SPL routines support all data types except BYTE, TEXT, BIGSERIAL, SERIAL, SERIAL8, and ROW.
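
A hypothetical sketch of an SPL function with an OUT parameter that is invoked from another SPL routine, which receives the OUT value in a local variable.

  CREATE FUNCTION add_tax (price DECIMAL(10,2), OUT tax DECIMAL(10,2))
      RETURNING DECIMAL(10,2);
      LET tax = price * 0.07;
      RETURN price + tax;
  END FUNCTION;

  CREATE FUNCTION price_with_tax (price DECIMAL(10,2))
      RETURNING DECIMAL(10,2), DECIMAL(10,2);
      DEFINE total      DECIMAL(10,2);
      DEFINE tax_amount DECIMAL(10,2);

      LET total = add_tax(price, tax_amount);  -- tax_amount receives the OUT value
      RETURN total, tax_amount;
  END FUNCTION;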

Specifying INOUT Parameters for a User-Defined Routine

User-Defined Functions

CASE expressions in the ORDER BY clause of SELECT statements

The ORDER BY clause can now include a CASE expression immediately following the ORDER BY keywords. This expression allows the sort key for the query result set to be based on the evaluation of multiple logical conditions.
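
For example, the following query lists orders with a null ship_date first and then sorts by order date:

  SELECT order_num, order_date, ship_date
    FROM orders
   ORDER BY CASE WHEN ship_date IS NULL THEN 0 ELSE 1 END,
            order_date;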

Ordering by a CASE expression
INTERSECT and MINUS set operators for combined query results
You can use the INTERSECT and MINUS set operators to combine two queries. Like the UNION and UNION ALL set operators, they combine the result sets of the two queries that are their left and right operands.
  • INTERSECT returns only the distinct rows that are in both the left and the right query results.
  • MINUS returns only the distinct rows from the left query result that are not in the right query result.
You can use EXCEPT as a keyword synonym for the MINUS set operator.
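
For example, the first query below returns customers who placed orders, and the second returns customers who never did:

  SELECT customer_num FROM customer
  INTERSECT
  SELECT customer_num FROM orders;

  SELECT customer_num FROM customer
  MINUS
  SELECT customer_num FROM orders;
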
Set operators in combined queries

INTERSECT Operator

MINUS operator

Simplified CREATE EXTERNAL TABLE syntax for loading and unloading data

Data files from external sources with special delimiter characters can be loaded and unloaded more easily than in previous releases. Use the CREATE EXTERNAL TABLE statement, with or without the ESCAPE ON keywords, and specify the DELIMITER keyword. The database server inserts the escape character immediately before any delimiter character that occurs in the data. You can use the ESCAPE OFF keywords to improve performance if the data does not contain special delimiter characters.
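
A sketch of a delimited external table whose data contains no special delimiter characters, so escape processing is turned off; the file path and table names are hypothetical.

  CREATE EXTERNAL TABLE ext_customer
    SAMEAS customer
    USING (
      DATAFILES ("DISK:/tmp/customer.unl"),
      FORMAT 'DELIMITED',
      DELIMITER '|',
      ESCAPE OFF
    );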

Table options