
What's New in IBM Db2 Warehouse on Cloud, IBM Db2 Warehouse, and IBM Db2 on Cloud



Abstract

The current releases of IBM® Db2® Warehouse on Cloud, IBM Db2 Warehouse, and IBM Db2 on Cloud support the following new features and functions. Any changed or deprecated functions that require changes on the client side are also noted here.

Content


November 22, 2017



Db2 Warehouse updates

v2.2 now available

Db2 Warehouse v2.2 contains the following enhancements:

New Federation data sources in the console

MySQL and PostgreSQL are now available in the Db2 Warehouse web console as data sources for Federation.




November 17, 2017



Db2 Warehouse on Cloud updates
In 1Q 2018, there will be a change in how constraints are enforced in Db2 Warehouse on Cloud. For more information, see the following technote.



November 9, 2017



Db2 Warehouse on Cloud updates

Compatibility

This update includes the following compatibility updates:
  • Enhanced NZPLSQL support, including support for %ROWTYPE, %TYPE, FOUND, ROW_COUNT, REFTABLE, AUTOCOMMIT, and SELECT INTO record type.
  • External tables have been extended with new options to support more native Db2 Warehouse file formats, including the CCSID, TIMESTAMP_FORMAT, DATE_FORMAT, and TIME_FORMAT options. Customers unloading and reloading data from Db2 Warehouse should use CCSID 1208 rather than ENCODING(INTERNAL) to keep the data in Unicode.
  • The HAVING clause now supports column aliases under SQL_COMPAT='NPS', as shown in the sketch below.
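
For example, with NPS compatibility enabled, a query can reference a select-list alias directly in the HAVING clause. A minimal sketch (the table and column names are hypothetical):

    SET SQL_COMPAT = 'NPS';

    SELECT dept, SUM(salary) AS total_pay
    FROM staff
    GROUP BY dept
    HAVING total_pay > 100000;   -- alias reference allowed under NPS mode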





October 27, 2017



Db2 Warehouse updates

v2.1 now available

Db2 Warehouse v2.1 contains the following enhancements:


z Systems support

You can now deploy Db2 Warehouse on Linux on IBM z Systems hardware. Together, Linux and z Systems hardware provide outstanding data security, high availability, and superior performance. For deployment prerequisites, see IBM Db2 Warehouse prerequisites (Linux on IBM z Systems hardware).

Simplified registration


Gaining access to the Db2 Warehouse containers will become easier for many platforms: in early November, the containers will be available in Docker Store. To obtain access:
1. Obtain a Docker ID.
2. Log in to Docker Store.
3. Search for the relevant container: IBM Db2 Warehouse (image container for Linux), IBM Db2 Warehouse Developer-C for Non-Production (image container for Windows and Mac), IBM Db2 Warehouse client container, or IBM Db2 Warehouse sample data container.
4. In the search results, click the box for the relevant container.
5. Click Proceed to Checkout.
6. Complete your contact information, agree to the terms and conditions, and click Get Content.

Improved web console usability


The Db2 Warehouse web console has been redesigned for better usability:

  • The home page now provides a summary of hardware issues, software issues, database alerts, and storage usage.
  • To get key metrics about database activity, you can use the new Monitor > Dashboard option.
  • To get quick access to key console options for your role, you can use the new menu in the upper right corner of the console (click the person icon). This menu includes an About option that provides Docker image information.
  • To walk through the options in the console navigation menu and learn what the options do, click the new Discover button in the upper right corner of the console.
  • To download scripts to move files to the cloud, for later loading into Db2 Warehouse, use the Connect > Download Tools option.
Livy server job scheduler for Spark

When you deploy the Db2 Warehouse image container, a Livy server is automatically installed and configured for you. You can submit Spark applications from a client system to a Spark cluster running alongside Db2 Warehouse, inside the same container. You can use the new docker exec -it Db2wh livy-server command to start or stop the Livy server or obtain the status of the Livy server. For more information, see Submitting Spark applications through a Livy server and livy-server command for IBM Db2 Warehouse.

Additional operating system support for POWER LE hardware

The Red Hat Enterprise Linux operating system is now supported for Db2 Warehouse on POWER LE hardware.

Enhanced diagnostic information

You can use the new dbdiag command to collect diagnostic data for components of your Db2 Warehouse implementation. You can selectively collect data according to the component type, problem symptom, and node. For more information, see dbdiag command for IBM Db2 Warehouse.

Production-level support for recently introduced configuration options

The TABLE_ORG, DB_PAGE_SIZE, DB_COLLATION, DB_TERRITORY, and DB_CODESET configuration options, which were introduced as a technical preview in Db2 Warehouse 2.0, are now supported for production environments, through the Db2 Warehouse image container. For information about these options, see Configuration options for the IBM Db2 Warehouse image.

Compatibility changes

For information about new compatibility changes, see the November 9, 2017 entry for Db2 Warehouse on Cloud.


In addition, the container names have changed. For details, see IBM Db2 Warehouse containers. Also, Db2wh is now used as the container name in the commands in the documentation.




October 24, 2017



Db2 Warehouse updates

Deployment through IBM Cloud Private

You can now deploy version 2.0 of Db2 Warehouse and Db2 Warehouse Developer-C for Non-Production on IBM Cloud Private, an application platform for developing and managing on-premises, containerized applications. IBM Cloud Private is an integrated environment that includes Kubernetes, a private image repository, a management console, and monitoring frameworks. IBM Cloud Private does not support Db2 Warehouse MPP deployments. For more information, see IBM Db2 Warehouse.




September 29, 2017



Db2 Warehouse updates

v2.0 now available

Db2 Warehouse v2.0 contains the following enhancements:

New ways to customize your database for your workloads

The TABLE_ORG and DB_PAGE_SIZE options are now available for the -e parameter of the docker run command. The TABLE_ORG option specifies whether tables use column-organized storage (which is best for analytic workloads) or row-organized storage (which is best for OLTP workloads). These options are available as a technical preview in the image container. For more information, see Configuration options for the IBM Db2 Warehouse image.
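
TABLE_ORG sets the deployment-wide default; an individual table can still override that default with the standard Db2 ORGANIZE BY clause. A minimal sketch (the table is hypothetical):

    -- row organization for an OLTP-style table, regardless of the TABLE_ORG default
    CREATE TABLE app_sessions (
       session_id BIGINT NOT NULL,
       user_id    INTEGER,
       started_at TIMESTAMP
    ) ORGANIZE BY ROW;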

Administration improvements


You can now select a role and grant privileges to it or revoke privileges from it by clicking Administer > Privileges. In addition, the pages that are displayed when you click Settings > Users and Privileges or Settings > My Profile have been redesigned.

Enhanced health information

You can now use the docker exec -it dashDB dashdbhealth command to check the health of various aspects of your Db2 Warehouse implementation, on one or multiple nodes. This command can be very useful in helping to diagnose problems. For more information, see dashdbhealth command for IBM Db2 Warehouse.

Additional SSL support

The db_catalog command, which catalogs a remote Db2 Warehouse database for use with the tools in the Db2 Warehouse client container, now supports the --ssl parameter. You can use this parameter with the --add parameter to catalog the remote database with SSL support. You can then run Db2 CLP commands and scripts over SSL. For more information, see db_catalog command for IBM Db2 Warehouse.

Changes to constraint enforcement

By default, the NOT ENFORCED parameter applies to constraints for tables that you create in Db2 Warehouse 2.0 or later. Because the database manager does not enforce uniqueness by default for new tables, incorrect or unexpected results can occur if the table data violates the not-enforced constraint. If you want to enforce uniqueness, specify the ENFORCED parameter when you create or alter unique or referential constraints (such as primary key and foreign key). The change in default behavior aligns with best practices for warehouse workloads and improves ingest performance.
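
For example, to keep enforced uniqueness on a new table, state the constraint attribute explicitly. A minimal sketch (the table and constraint names are hypothetical):

    CREATE TABLE customers (
       cust_id INTEGER NOT NULL,
       email   VARCHAR(128),
       CONSTRAINT pk_cust PRIMARY KEY (cust_id) ENFORCED   -- the default is now NOT ENFORCED
    );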



Also, you can tailor your database for your location by specifying the DB_CODESET, DB_COLLATION_SEQUENCE, and DB_TERRITORY options for the -e parameter of the docker run command. These options are available as a technical preview in the 1.11.2 experimental container. For more information, see Configuration options for the IBM Db2 Warehouse image.




September 8, 2017



Db2 Warehouse on Cloud (formerly, dashDB for Analytics) updates

External tables

External tables are now supported. For information about how to create them, see the CREATE EXTERNAL TABLE statement.
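
For example, an external table can expose a delimited file as if it were a regular table. A minimal sketch (the file path, columns, and option syntax are illustrative and may vary by release):

    CREATE EXTERNAL TABLE ext_sales (
       sale_id INTEGER,
       amount  DECIMAL(10,2)
    ) USING (
       DATAOBJECT '/scratch/sales.csv'
       DELIMITER ','
    );

    -- load the file's rows into a regular table
    INSERT INTO sales SELECT * FROM ext_sales;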

INSERT performance improvements

Parallel inserts, vectorized inserts, and reduced logging are now enabled, all of which can help improve performance. You can now complete large CTAS or INSERT from SELECT operations without using the LOAD from CURSOR syntax.
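
For example, a large table copy can now be written directly as a CTAS statement. A minimal sketch (the table names are hypothetical):

    CREATE TABLE sales_2017 AS
       (SELECT * FROM sales WHERE sale_date >= '2017-01-01')
    WITH DATA;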

Federation

Federation (formerly called "fluid query") capability is now available for Db2-based sources over non-SSL connections. You can use federation capability to access data that is located on a data source that is different from the one to which you submitted a query. Supported data sources include IBM Db2, IBM Db2 Warehouse on Cloud, IBM Db2 Warehouse, and IBM Big SQL.
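
Setting up federation follows the usual Db2 pattern: define a wrapper and a server, map credentials, and create a nickname that queries can reference like a local table. A minimal sketch (the server, credential, and table names are hypothetical, and option details vary by source type):

    CREATE WRAPPER drda;                        -- wrapper for Db2-family sources
    CREATE SERVER remote_dwh TYPE db2/udb VERSION '11' WRAPPER drda
       OPTIONS (HOST 'example.com', PORT '50000', DBNAME 'BLUDB');
    CREATE USER MAPPING FOR USER SERVER remote_dwh
       OPTIONS (REMOTE_AUTHID 'remoteuser', REMOTE_PASSWORD '********');
    CREATE NICKNAME sales_remote FOR remote_dwh.schema1.sales;

    SELECT COUNT(*) FROM sales_remote;          -- runs against the remote source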




August 25, 2017



Db2 Warehouse (formerly, dashDB Local) updates

v1.11.1 now available

The new Db2 Warehouse v1.11.1 release contains fixes that are important for the proper functioning of the product, so you should update your deployment as soon as possible. For instructions, see Updating Db2 Warehouse.

Also, see the webcasts.




August 8, 2017



Db2 Warehouse (formerly, dashDB Local) updates

External tables are now supported. For information about how to create them, see the CREATE EXTERNAL TABLE statement topic or the demo, which also shows how to load data and check row counts.




August 2, 2017



Db2 Warehouse (formerly, dashDB Local) updates

A "what's new" webcast for the June release is now available.





July 28, 2017



Db2 Warehouse (formerly, dashDB Local) updates

v1.11.0 now available

Db2 Warehouse v1.11.0 contains fixes and the following enhancements and changes.

Enhancements and changes to deployment and related tasks

Deploying Db2 Warehouse and performing related tasks, such as updating and scaling Db2 Warehouse, are now simpler because you don't have to issue the docker exec -it dashDB start command. Some other minor changes have also been made.

Enhancement to HA

After a head node failover, if the original head node becomes reachable again, restarting the system causes the original head node to become the current head node again.

Additional schema privileges

You can now grant or revoke the following new schema privileges by using the Db2 Warehouse web console: ACCESSCRTL, DATAACCESS, DELETEIN, EXECUTEIN, INSERTIN, LOAD, SCHEMAADM, SELECTIN, and UPDATEIN. To grant or revoke schema privileges, click Administer > Schemas, select a schema, and click Privileges.
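
These schema privileges can also be managed in SQL. A minimal sketch (the schema and user names are hypothetical):

    GRANT SELECTIN, INSERTIN ON SCHEMA sales TO USER dbuser;
    REVOKE DELETEIN ON SCHEMA sales FROM USER dbuser;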

Reporting of CPU information

To help you monitor usage, the get_system_info command and the Settings page in the Db2 Warehouse web console now report the numbers of CPU cores, in addition to the numbers of physical CPUs.

Docker engine and storage driver support

Db2 Warehouse now supports Docker engine version 1.12.6 or higher, rather than just 1.12.6. This support applies to the CE and EE Docker engines that are supported by Docker and by Ubuntu (docker.io). Also, the devicemapper storage driver is now required for only CentOS and RHEL, not for all operating systems.

Performance improvements

Parallel inserts are now enabled in Db2 Warehouse, except when you are using HADR. Vectorized inserts and reduced logging are also now enabled. All three features can help improve performance.

Change to the criteria for determining the maximum number of nodes

If you deploy Db2 Warehouse v1.11.0 with 7.68 TB or less of cluster total RAM, 24 data partitions are allocated, so the maximum number of nodes when you deploy or scale out is 24. If you deploy Db2 Warehouse v1.11.0 with more than 7.68 TB of cluster total RAM, 60 data partitions are allocated, so the maximum number of nodes when you deploy or scale out is 60.

db_migrate command

The db_migrate command that is available in the Db2 Warehouse client container and image container now contains the functionality of the db_migrate_preview command, such as the loader load|exttab parameter.




July 26, 2017



Db2 Warehouse on Cloud (formerly, dashDB for Analytics) and Db2 on Cloud (formerly, dashDB for Transactions) updates

A new console is now available with a more responsive design, a simplified user flow, and enhanced database usage monitoring. You can programmatically use all of the new web console functionality through REST APIs. For more information, see the following blog post.




July 18, 2017



New offering names

The following table summarizes the new offering names:

Previous name                 New name                      Effective date
IBM dashDB for Analytics      IBM Db2 Warehouse on Cloud    July 18, 2017
IBM dashDB Local              IBM Db2 Warehouse             July 18, 2017
IBM dashDB for Transactions   IBM Db2 on Cloud              June 20, 2017

Also, the dashDB Local product for Windows and Macintosh platforms is now called IBM Db2 Warehouse Developer-C for Non-Production. It is now available at no charge, with a non-expiring license.





June 20, 2017



dashDB for Transactions has a new name and includes a new plan

Effective June 20th, IBM dashDB for Transactions was renamed IBM Db2 on Cloud. For more information, see the following blog post. In the coming weeks, you'll notice the new name when you open the console and in the product documentation.

The new Flex plan allows you to scale CPU/RAM and storage independently, on demand. Scaling is easy: just use the slider bars in the console. For more information, click here.




May 30, 2017



dashDB Local updates

v1.9.0 now available

The dashDB Local v1.9.0 product contains fixes and the following enhancements. For more information, see the "what's new" June webcast.

External LDAP

In previous releases, dashDB Local always used a self-contained LDAP server for authentication and authorization. You now have the option of configuring dashDB Local to act as a client to an external LDAP server by using either the new configure_ldap command or the new Settings > External LDAP option in the dashDB Local web console. You can also monitor the health of your external LDAP server by using the web console.

get_webconsole_url command

You can use the new get_webconsole_url command to display the IP address and port number of the host where you deployed the dashDB Local image.

License monitoring

If your dashDB trial license is within 7 days of expiring or has expired, a message is now displayed in the dashDB Local web console. Also, the status, version, and start commands now display a license information banner that shows the license type, the license state, the expiry date, and the number of days before expiration of a trial license.

dashDB Local web console

The style and color of the dashDB Local console have changed somewhat to be more consistent with the look and feel of tools from other IBM Analytics products.


Experimental container

The new dashDB experimental container provides preliminary versions of new and enhanced advanced features that are planned for a future official dashDB product. The features are external tables, reduced logging, vectorized inserts, workload management (WLM), and parallel insert, update, and delete (parallel IUD). These features are available for your preview and evaluation; they have not been fully tested, so do not use them in a production environment.

To deploy the container, follow the instructions in Deploying dashDB Local (Linux), but use one of the following repository and tag combinations:
  • For the Ubuntu operating system on POWER LE hardware: ibmdashdb/preview:v1.9.0-experimental-ppcle
  • For Linux operating systems on other hardware: ibmdashdb/preview:v1.9.0-experimental-linux

For information about how to use the features in the experimental container, contact your IBM Support representative.




May 2, 2017



dashDB for Analytics and dashDB for Transactions updates

Compatibility
    With dashDB for Analytics, you can now use the Boolean data type for all row-organized and column-organized dashDB tables. You can implicitly or explicitly cast values from Boolean data types to integer or string data types and vice versa. You can also carry out "group by" and "order by" operations on Boolean columns. Also, dashDB offers a set of new scalar functions, each of which returns a Boolean value based on the truth value of an input expression. Boolean support is not currently available with dashDB for Transactions.
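
    A minimal sketch of the new Boolean support (table and column names are hypothetical):

        CREATE TABLE orders (
           order_id  INTEGER NOT NULL,
           expedited BOOLEAN
        );

        INSERT INTO orders VALUES (1, TRUE), (2, FALSE);

        SELECT expedited, COUNT(*) AS order_count
        FROM orders
        GROUP BY expedited           -- "group by" on a Boolean column
        ORDER BY expedited;

        SELECT CAST(expedited AS INTEGER) FROM orders;   -- explicit cast to an integer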

Security updates
    Updates include enhancements to security requirements for cipher specs of applications that access dashDB. If your application cannot connect to dashDB, complete the steps in the following technote.




April 28, 2017



dashDB Local updates

v1.8.0 now available

The dashDB Local v1.8.0 product contains fixes and the following enhancements. For more information, see the "what's new" May webcast.

Parallel inserts

You can now enable parallel inserts into the dashDB Local database by using the docker exec -it dashDB parallel-enable.sh command. By default, parallel inserts are disabled; if you enable them, you can disable them by using the docker exec -it dashDB parallel-disable.sh command. Both commands are currently available for technical preview only. Performing parallel inserts can increase the log space requirement.

For more information about the commands, see parallel-enable command for dashDB Local and parallel-disable command for dashDB Local.

Fluid Query

Fluid Query now supports Oracle as a data source on both POWER LE and Intel hardware. In addition, the Microsoft SQL Server, Apache Hive, Cloudera Impala, and Netezza data sources, which were previously supported on only Intel hardware, are now also supported on POWER LE hardware.



March 31, 2017



dashDB Local updates

v1.7.0 now available

The dashDB Local v1.7.0 product contains fixes and the following enhancements. For more information, see the "what's new" April webcast.

Monitoring

In the dashDB Local web console, you can use the Monitor > Systems option, followed by the Software tab, to obtain the following information about database health:
  • The overall database status
  • Whether the database is in write pending state
  • The number of tables in reorg pending state
  • The number of tables in load pending state
  • The number of unavailable tables

Numbers of data partitions and nodes

If you have at least 960 GB of cluster total RAM when you deploy dashDB Local 1.7, 60 data partitions are allocated. (In previous releases, 24 partitions were allocated, regardless of how much cluster total RAM you had.) You can therefore now deploy or scale out to 60 nodes, and the higher number of data partitions can help improve performance. If you have less than 960 GB of cluster total RAM when you deploy dashDB Local 1.7, 24 data partitions are allocated, and the maximum number of nodes when you deploy or scale out is therefore 24. Even if you increase the cluster total RAM to at least 960 GB after deployment, you cannot scale out to more than 24 nodes.

High availability disaster recovery (HADR) for SMP deployments

In a dashDB Local SMP deployment, you can set up HADR with one primary node and one standby (disaster recovery) node. HADR in SMP deployments does not use automatic failover and failback. If an unplanned outage occurs, you must instruct the standby node to take over as the new primary node, resolve the issue on the new standby node (the old primary node), and then instruct the new standby node to become the primary node again. To help you to set up and manage HADR in an SMP deployment, the setup_hadr and manage_hadr commands and the -e HADR_ENABLED='YES' parameter for the docker run command are now available. For more information, see High availability and disaster recovery for dashDB Local.

Azure
For documentation and a video on deploying dashDB Local on the Microsoft Azure cloud computing platform, see Deploying dashDB Local on Microsoft Azure and Tutorial: Deploy dashDB Local on Microsoft Azure.

Sample data

A sample data container for dashDB Local is now available. You can use this container, which is separate from the one that contains the dashDB Local product image, to load sample data into your BLUDB database. For instructions, see Loading sample data for dashDB Local.

Integrated ODBC driver for fluid queries

The dashDB Local product now contains an integrated, preconfigured ODBC driver for use with fluid queries. This lets you directly access remote data sources such as Hive, Impala, Spark, Netezza, and SQL Server without having to download, install, and configure the driver yourself.


 

February 27, 2017



dashDB Local updates

v1.6.0 now available

The dashDB Local v1.6.0 product contains fixes and the following enhancements:
  • The dashDB Local web console has been enhanced as follows:
    • You can use the new Administer > Workloads option to view, create, and drop database workloads.
    • Remote Tables (Federation) now supports the following additional data sources, except on POWER LE hardware: Apache Hive, Cloudera Impala, and Netezza.
  • You can use dashDB Local with the HAProxy load balancer. For information, see Setting up the HAProxy load balancer for dashDB Local.
  • Fluid Query is now officially supported. You can use the Fluid Query feature to access data that is located at a data source that is different from the one against which you submitted a query.
  • Documentation and a video on deploying dashDB Local on Amazon Web Services are now available. See Deploying dashDB Local on Amazon Web Services and Tutorial: Deploy dashDB Local on Amazon Web Services.

For instructions on how to update your system, see Updating dashDB Local.


 

January 30, 2017



dashDB Local updates

v1.5.0 now available

The dashDB Local v1.5.0 product contains fixes and the following enhancements:

  • The dashDB Local web console has been improved:
    • When you perform an action for a database object such as a table by using the Administer option, clicking Run causes the action to be performed immediately, without opening the SQL editor.
    • The Administer option now provides an easier way to manage object privileges.
    • The Remote Tables (Fluid Query) option now supports the Microsoft SQL Server (MSSQL) data source.
  • Two new stored procedures based on Apache Spark are now available:
    • A generalized linear model (GLM) stored procedure. GLM handles heavy-tailed distributions and nominal (discrete-valued) distributions.
    • A TwoStep clustering stored procedure. TwoStep clustering is a data mining algorithm for large data sets. It is faster than traditional methods because it typically scans a data set only once before it saves the data to a clustering feature tree.
  • Changing the default configuration, such as for the Oracle compatibility mode, is now simpler. Instead of using the /mnt/clusterfs/options file, you now specify the -e <option>=<value> parameter for the docker run command.

For instructions on how to update your system, see Updating dashDB Local.


 


December 30, 2016



dashDB Local updates

v1.4.1 now available

The dashDB Local v1.4.1 product contains fixes to enhance the stability of dashDB Local. For instructions on how to update your system, see Updating dashDB Local.


 

December 14, 2016



dashDB for Analytics and dashDB for Transactions updates

Driver package for POWER LE hardware
    The PowerLinux (ppc64le) dashDB driver package is now available for POWER LE hardware. You can download the PowerLinux driver package from the dashDB console by clicking Connect > Download Tools. For instructions on installing the driver package, see dashDB driver package.

Enhanced password policy
    For greater security, the password restrictions to access dashDB have been changed as follows.

    Previous password policy:
    - Minimum seven characters
    - At least one letter and one number or special character

    New password policy:
    - Minimum twelve characters
    - At least one uppercase letter, one lowercase letter, one number, and one special character

    Current passwords do not need to be changed, but if an administrator adds a new user or changes a current user's password, the new policy applies. The new policy also applies if users change their password on the profile page.

Logins are disabled after five failed log-in attempts
    If you attempt to log into dashDB five times with an incorrect password, the account will be locked. You can try again in 30 minutes. If you are on a plan other than an Entry plan, you can ask your administrator to unlock your account. The credentials for Entry plan users are on the Connect > Connection Information page.


 

November 25, 2016



dashDB Local updates

v1.4.0 now available

Update to the latest version of dashDB Local to take advantage of the following enhancements. For instructions, see Updating dashDB Local.


Driver package for POWER LE hardware

The PowerLinux (ppc64le) dashDB driver package is now available for POWER LE hardware. You can download the PowerLinux driver package from the dashDB Local console by clicking Connect > Download Tools. For instructions on installing the driver package, see dashDB driver package.


Security
    You can now use the dashDB REST API to create LDAP users for the dashDB Local database.

    You can use the rotate_db_master_key command to change the value of the master key. The master key is used to encrypt the data encryption key.


dashDB Local console

    "Quick tours" have been added to the home page and to the Load, Administer, and Run SQL pages. Quick tours walk you through the features of the console.

    You can use a new button on the Spark Analytics page to open the Spark application monitoring page.

    If you click Monitor > Workloads, the Overview page shows the database time breakdown by database or by workload. You can drill down to see the top resource consumers.

    Monitoring of the history of database utility execution is now supported.

    On the Run SQL page, you can insert and replace multiple scripts at the same time.

    You can use the Privilege button on the Nickname Explorer page and the Manage Servers page to grant access to nicknames and remote servers.

    If you attempt to log in to the dashDB Local console six times with an incorrect password, the account will be locked. You can try again in 30 minutes, or you can ask your administrator to unlock your account.

      
Swarm

    The dashDB_local_Swarm_install.sh script now supports Docker v1.12.x and the new -p (--port) option, which specifies the port number.


Apache Spark

    Apache Spark, which is integrated into dashDB Local, has been upgraded from 1.6 to 2.0.

    You can now use Spark with R. You can launch and run a SparkR batch application by using the spark-submit.sh script, the IDAX_SPARK_SUBMIT stored procedure, or the REST API.

    You can now use socket communication for a more efficient local data transfer when reading data from dashDB tables into Spark. This option is especially helpful for large tables. You can specify this option when reading the data frame.

    A new data source parameter is available. You can now use an append mode to write small amounts of data, repeatedly if necessary, into an existing table.

    A new compound self-service demo notebook for dashDB with Spark is available at https://github.com/ibmdbanalytics/dashdb_analytic_tools/blob/master/dashdblocal_notebooks/Tornado%20Clustering.ipynb.


     

November 23, 2016



dashDB for Analytics updates

Load
    Aspera, which was previously available as a technical preview, is now fully supported. There is no longer a data cap on non-Entry plans. The data cap for the Entry plan remains 400 MB.

Console
    The console now includes a product tour that covers the Home, Load Hub, Tables, and Run SQL pages.
    The top-level console menu now includes options for Connect (formerly called Connection Information) and Downloads.



November 22, 2016



We have simplified the dashDB for Analytics plan names. These plans have the same features and benefits. Only the names are changed.

    Old name                            New name                                     Description
    Entry                               IBM dashDB for Analytics Entry               No charge for up to 1 GB of data storage; 20 GB maximum data storage; one dedicated schema per service instance on a shared server.
    Enterprise 64.1                     IBM dashDB for Analytics SMP Small           Dedicated instance with 64 GB RAM.
    Enterprise 256.4                    IBM dashDB for Analytics SMP Medium          Dedicated instance with 256 GB RAM.
    Enterprise 256.12                   IBM dashDB for Analytics SMP Large           Dedicated instance with 256 GB RAM, designed for storage-dense applications.
    Enterprise MPP.4                    IBM dashDB for Analytics MPP Small           Dedicated instance of a multiple-node cluster.
    Enterprise MPP 32.244.1400 for AWS  IBM dashDB for Analytics MPP Small for AWS   Dedicated instance with 244 GB RAM.

You can order all of the plans listed above with your credit card or with Bluemix subscription billing, with the exception of IBM dashDB for Analytics MPP Small for AWS. For the IBM dashDB for Analytics MPP Small for AWS plan, contact IBM Sales.

For complete details, visit the IBM Bluemix Catalog.


     

October 28, 2016



dashDB Local updates

v1.3.0 now available
    Update to the latest version of dashDB Local to take advantage of the following enhancements. For instructions on how to update, see here.

Container packaging and delivery
    As well as using the docker run and pull commands with the dashDB Local image from Docker Hub, you can use the docker load command to load a new downloadable stand-alone dashDB Local image. For information about how to download the stand-alone image, contact your IBM Support representative.

Workload management (WLM)
    In dashDB Local 1.3.0, for newly created databases, WLM adaptive admission control is used to admit work to the system based on estimated and observed resource usage. However, fixed concurrency limits (for example, admit up to 10 queries), which were used in previous releases, remain in place for existing databases. Key benefits of adaptive admission control can include improved throughput and reduced errors due to concurrency.

Monitoring
    The Activity event monitor with statement details is used to capture information about individual query executions. In the dashDB Local 1.3.0 console, if you click Monitor > Workloads and then click History and the Individual Executions tab, you can view the statements by group, such as by WLM workload or service class. Also, if you click Package Cache while in history mode, you can view the metrics for the statements that were captured by the Activity event monitor and information about the statements that used the most resources.

POWER LE support
    Support for dashDB Local on POWER LE hardware is provided as a technical preview. For POWER LE hardware, the only supported operating system for dashDB Local is Ubuntu 16.04 or later.

Fluid Query
    You can use the Administer > Remote Tables option in the dashDB Local console to define remote tables to be referenced by Fluid Query.


     

October 4, 2016



dashDB for Analytics updates

Updated dashDB drivers

Validate SQL syntax in the SQL editor as you type
    The Run SQL page has a new look and new options. You can now choose to run selected statements or to run the query from the cursor position.
    You can now validate the syntax as you enter the statements. To enable the validation, click Options and turn on Real-time validation.

SQL compatibility updates
    • For integer input, the SQL SUM function now returns a BIGINT result, and the AVG function now returns a DECIMAL(31,6) result.
    • dashDB no longer enforces partitioning key overlap with non-enforced constraints.
    • When operating in NPS compatibility mode, the AS clause of a CREATE TABLE statement can use the same syntax as the corresponding clause of a Netezza CREATE TABLE AS (CTAS) command.
    • When operating in NPS compatibility mode, an expression can refer to column aliases that are set in the select list. A sketch of the last two items follows this list.
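
    A minimal sketch of the last two items (table and column names are hypothetical):

        SET SQL_COMPAT = 'NPS';

        -- Netezza-style CTAS, with an expression ("total") that reuses the "tax" alias
        CREATE TABLE big_orders AS
           SELECT order_id, amount, amount * 0.07 AS tax, amount + tax AS total
           FROM orders
           WHERE amount > 1000;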



September 30, 2016



dashDB Local updates

v1.2.0 now available
    Update to the latest version of dashDB Local to take advantage of the following enhancements. For instructions on how to update, see here.

Integrated Apache Spark support
    Apache Spark, previously available as a technical preview, is now enabled by default.
    Apache Spark offers numerous advantages to users of dashDB Local, such as the ability to interactively transform, visualize, and analyze data, and to run highly scalable analytic applications. You can run Apache Spark applications that analyze data in a dashDB database and write their results to that database. You can also use Apache Spark to subscribe to streaming engines to process and land streaming data directly into dashDB tables.

Develop Spark applications using the Jupyter notebooks container
    You can use Jupyter notebooks to develop Spark applications interactively and then either deploy them to dashDB or export their source code for further development. Use the Docker image provided for dashDB Local to quickly and easily set up a Jupyter environment that is ready to interact with dashDB's integrated Apache Spark.

Deploy, run, and monitor Spark applications using the CLI, SQL, the IBM dashDB Analytics API, or the web console
    A unique one-click deployment function allows you to transform your interactive Jupyter notebooks into deployed Spark applications inside dashDB. You can also develop your own Spark applications with any other development tools and then deploy them into dashDB. You can run and manage deployed Spark applications with the spark-submit.sh command-line tool, the documented REST API, or the SPARK_SUBMIT stored procedure that you can call from a database SQL connection. Spark applications can also be monitored by using the dashDB web console.

Run Spark-based machine learning routines
    For a defined set of popular machine learning problems, you can use integrated stored procedures for training models, making predictions, and managing stored models. These procedures internally leverage Apache Spark with machine learning libraries.
    For more information, see Analyzing with Spark on dashDB Local.

Enhanced SQL editor
    You can now select from a list of predefined SQL statements, which you can use as templates for creating your own queries. Available statements include SELECT, INSERT, DELETE, and UPDATE. You can add your queries to a list of saved scripts and view script execution history that includes details of the success or failure of a script execution.

Generate SELECT and INSERT statements directly from database object pages
    You can now generate SELECT and INSERT statements for objects, such as tables, views, aliases, MQTs, and nicknames, directly from within the Administer window. Now, instead of jumping back and forth between the object page and the SQL editor, you can simply edit out any unwanted properties from your generated SQL statement and run the query.

Fluid Query now available in technical preview
    The Fluid Query feature lets you access data that is located at a data source that is different from the one to which you submitted a query.



August 30, 2016



dashDB Local v1.1.0 updates

v1.1.0 now available
    Update to the latest version of dashDB Local to take advantage of the following enhancements. For instructions on how to update, see here.

New tagging convention
    To ensure that our Kitematic users are getting the right image, we are now using the following convention for tagging our images:
    • Windows/Mac (using Kitematic): “latest” (but v1.0.0-kitematic works too)
    • Linux: “latest-linux”
    As a result, Windows and Mac users who are performing the initial deployment or updating can just click the CREATE button, and the “latest” image is automatically selected. Linux users have to specify the “latest-linux” tag in their docker run or docker pull commands. The cut-and-pastable commands we provide reflect this change.

Oracle compatibility
    You can specify that your dashDB Local database is to be created in Oracle compatibility mode, allowing you to run existing Oracle applications. For more information, see here.

Monitoring enhancements
    • MPP tables now show their data distribution statistics.
    • You can now easily switch between real-time and historical monitoring.
    • A new time range slider makes it easier to zero in on periods of interest.
    • Console response when switching between pages is greatly improved.

Object management enhancements
    We’ve made it easier for you to create application objects, such as stored procedures, user-defined types, and user-defined functions. You can now create them within the Administer objects window, and we’ll provide you with a template and instructions to help you along.
    • We’ve made it easier for you to grant and revoke privileges. You can now specify privileges for multiple users and multiple objects at the same time.
    • We’ve made some usability improvements around table altering operations, making it easier for you to add, update, or delete columns. For example, as you add columns, we perform instant validation of the fields you enter.

SQL editor enhancements
    • You can now save your existing SQL scripts as favorites for easy access later.
    • We’ve added support for find/replace with regular expressions.
    • We’ve added templates to help you build your SELECT, INSERT, UPDATE, and DELETE statements.

Container portability
    You can now move your dashDB Local data from one cluster to a new cluster in just a few simple steps. This is supported in both SMP and MPP deployments.


July 27, 2016



dashDB Managed Service updates (dashDB for Analytics, dashDB for Transactions)

Compatibility
    Manage data with dashDB Support Tools
    The IBM dashDB support tools package (also referred to as the dbtoolkit) contains a collection of scripts and utilities based on similar scripts delivered for the IBM PureData System for Analytics appliances and Netezza databases. You can use the dbtoolkit to migrate data from Netezza databases to dashDB, to connect to the dashDB database, and to run queries and reports on the migrated data.

    To learn more about the dbtoolkit and to download the package, go to this page.

Download tools
    Database Conversion Workbench (DCW) is available on Mac and no longer requires a Data Studio plug-in.
    Database Conversion Workbench 4.0.0 is available in the dashDB web console from Connect > Download Tools.

    For more information, see the release notes.

Security
    dashDB managed service is HIPAA ready
    The IBM dashDB Enterprise 256.4, 256.12, and MPP.4 plans, when provisioned on IBM SoftLayer®, have implemented required controls commensurate with the Health Insurance Portability and Accountability Act of 1996 (HIPAA) Security and Privacy Rule requirements. These include appropriate administrative, physical, and technical safeguards required of Business Associates in 45 CFR Part 160 and Subparts A and C of Part 164.

    For information about how to order a HIPAA-ready dashDB system, contact your IBM Cloud Data Services sales representative or send email to dashDB_Info@wwpdl.vnet.ibm.com.



July 26, 2016



New plans

New dashDB for Transactions plans allow high availability
    dashDB for Transactions (referred to below as dashDB Transactional) now offers high availability plans for both the 12.128.1400 and the 2.8.500 configurations. These plans are suitable for Online Transaction Processing (OLTP) workloads where co-located high availability of the database service is required. To purchase these plans, contact your IBM Cloud Data Services sales representative or send email to dashDB_Info@wwpdl.vnet.ibm.com.



July 22, 2016



New offering

dashDB Local is here!
    dashDB Local is next-generation data warehousing and analytics technology for use in private clouds, virtual private clouds, and other container-supported infrastructures. It is ideal when you must maintain control over data and applications, yet want cloud-like simplicity. For more information, see this page.

Actionable items for preview participants

Preview license expiration
    Per the terms of the dashDB Local preview program, your preview license will expire on 2016-08-01, at which time the preview version of your product will no longer work. You will have to re-register for the generally available (GA) product, after which you can have an additional 90-day trial license. Before those 90 days are up, you have to purchase and convert to a production license.

No migration from preview version
    You cannot perform a version update from the preview product to the v1.0.0 (GA) version of dashDB Local.

What's new and changed from the preview version

Integrated Apache Spark support
    Use the integrated Apache Spark framework to run Spark applications, both interactively and in batch, for data exploration, data transformation, and machine learning.

Scale up/down support
    You can now add or remove resources (CPU or memory) on the host servers in your dashDB Local deployment. When the services restart, any resource changes are detected and the database configuration is updated automatically.

No data distributed to catalog partition
    To improve performance, dashDB Local now excludes the catalog partition (partition 0) and instead distributes the data among the other 23 partitions.

Kitematic is now a separate image
    If you are deploying dashDB Local via Kitematic, you need to specify the v1.0.0-kitematic tag. Previously, you could use the default "latest", but that image will not work in the current release.

Enhanced port checking
    We've added some additional steps to our prerequisite check scripts that detect whether all the ports required by the dashDB stack are open on the hosts' Linux firewall.

Enhanced usage metrics
    The dashDB Settings panel in the web console now shows monthly aggregated vCPU usage metrics.

Miscellaneous changes and improvements
    - The container OS is now CentOS 7.2.
    - The console look and feel is improved.



June 30, 2016



New plans

dashDB Enterprise MPP for AWS plan available
    With the introduction of dashDB Enterprise MPP for AWS, dashDB is now available for provisioning on two choices of infrastructure providers. The new plan is MPP-enabled, requiring a minimum of three nodes per instance, while still providing a data warehouse service with dynamic in-memory columnar technology and in-database analytics integrated from Netezza. To purchase this plan, contact your IBM Cloud Data Services sales representative or send email to dashDB_Info@wwpdl.vnet.ibm.com.



June 9, 2016



Database

Random method of distribution now available when creating new tables
    When you create a new table in a dashDB MPP database, you can now choose either a hash distribution or a random distribution. If you choose a random distribution, data is distributed evenly across the system. For more information, see Creating tables in a dashDB MPP database.
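
    A minimal sketch (table and column names are hypothetical):

        CREATE TABLE clickstream (
           event_id BIGINT,
           payload  VARCHAR(2000)
        ) DISTRIBUTE BY RANDOM;   -- rows are spread evenly across the data partitions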



April 12, 2016



New plans

dashDB can do transaction processing
    Online transaction processing (OLTP) is now available for dashDB under new plan options. The IBM dashDB Transactional plans provide a highly reliable database that is configured and optimized for OLTP workloads. The Enterprise Transactional 12.128.1400 OLTP plan is a dedicated, bare-metal offering that provides 128 GB of RAM and up to 1.4 TB of data and active log storage. The Enterprise Transactional 2.8.500 plan is a virtual private node offering that provides 2 vCPUs, 8 GB of memory, and up to 500 GB for data and log storage. For more information, see Transactional workloads.

    To purchase an IBM dashDB Transactional plan, contact your IBM Cloud Data Services sales representative or send an email to dashDB_Info@wwpdl.vnet.ibm.com.



March 17, 2016



Load
    Load data into dashDB from a mail-in drive
    If you have large amounts of data to load into dashDB, you can use the mail-in drive option to load your data directly into your dashDB instance. For assistance, contact your IBM Cloud Data Services sales representative or send email to dashDB_Info@wwpdl.vnet.ibm.com.

    For more information, see Loading data from a mail-in drive.

High availability
    Node High Availability (HA) for the Enterprise MPP plan
    Node HA support has now been added to MPP deployments. If a node fails, Node HA automatically detects the failure, moves resources off the failed node, distributes them to other nodes, and restarts them, with no manual intervention required.



February 17, 2016



Connectors
    Access dashDB programmatically using a RESTful API
    Start using the IBM dashDB REST API to load and analyze your data without launching your browser. You can use the API to load delimited data in multiple .CSV, .XLS, and .XLSX files into dashDB, and monitor the status of load jobs if you’re loading a lot of data. You can also analyze data using your custom R scripts or ones you’ve created in the RStudio development environment integrated with dashDB.

    Find the API Reference and tutorials in the dashDB Learning Center.

Security
    SOC 2 Type 1 compliance
    dashDB is now SOC 2 Type 1 compliant. For more information, see Security compliances.

    Row and column access control (RCAC)
    Row and column access control (RCAC) is an extra layer of data security for the dashDB database in the Enterprise plan. For more information, see Row and column access control (RCAC) overview.
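
    A minimal sketch of a row permission (the table, column, and role names are hypothetical; user_id is assumed to be a VARCHAR column holding the user name):

        ALTER TABLE hr.employee ACTIVATE ROW ACCESS CONTROL;

        -- managers see every row; other users see only their own row
        CREATE PERMISSION hr.emp_row_access ON hr.employee
           FOR ROWS WHERE VERIFY_ROLE_FOR_USER(SESSION_USER, 'MANAGER') = 1
                    OR user_id = SESSION_USER
           ENFORCED FOR ALL ACCESS
           ENABLE;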

Document information

More support for: IBM dashDB

Software version: Version Independent

Operating system(s): Platform Independent

Reference #: 1961758

Modified date: 08 August 2017

