II13579: DB2 PERFORMANCE AND TUNING INFORMATIONAL APAR:
Search arguments: 5740xyr00 r610 r710 r810

Access Path Issues
(Probably the single most common cause of CPU regression is changes to access paths.)

1) Patrick Bossman's optimizer overview presentation.
http://www.ibm.com/support/docview.wss?rs=64&uid=swg27003900

2) RUNSTATS e-care doc: "How to collect frequency and cardinality statistics used for access path selection."
http://www-1.ibm.com/servlet/support/manager?rs=64&rt=0&org=SW&doc=1023318

2a) Minimum level of recommended RUNSTATS:
RUNSTATS TABLESPACE TS_NAME TABLE(ALL)** INDEX(ALL KEYCARD)
*Supplement this with frequency and multi-column cardinality stats as appropriate. You MUST understand your data layout. If you have data skew and/or column correlation issues, these need to be considered when coming up with a RUNSTATS methodology. Current, complete, and accurate stats are the key to getting good, stable access paths!
**Column statistics are necessary for all columns used as WHERE clause predicates. Collecting column statistics on all columns of all tables can be prohibitively expensive. Some customers accept the CPU cost to avoid the human effort; others prefer to be specific.

2b) Link to the DSTATS tool and doc (for V7 and below; V8 includes the DSTATS functionality in the DB2 RUNSTATS utility):
http://www-1.ibm.com/support/docview.wss?rs=64&context=SSEPEK&q=DSTATS&uid=swg24001598&loc=en_US&cs=utf-8&lang=en+en
DSTATS article by Steve Bower:
http://www.idug.org/member/journal/mar99/improving_db2.html

3) Patrick Bossman's white paper on stats collection strategies, "Filter factor statistics collection strategies":
http://www-1.ibm.com/support/docview.wss?rs=64&context=SSEPEK&q=statistics&uid=swg27004114&loc=en_US&cs=utf-8&lang=en

4) Understanding REOPT(VARS) - e-care doc, SAP OSS note, etc.
SAP OSS note #162034 - allows DB2 to determine access paths based on literal values instead of parameter markers. SQL performance benefits of using REOPT(VARS):
http://www-1.ibm.com/support/docview.wss?rs=64&context=SSEPEK&q=reopt%28vars%29&uid=swg21023304&loc=en_US&cs=utf-8&lang=en+en

5) Star join white paper and presentation:
http://www-3.ibm.com/software/data/db2/os390/techdocs/starjoin.pdf
http://www-1.ibm.com/support/docview.wss?rs=64&context=SSEPEK&q=star+join&uid=swg27002332&loc=en_US&cs=utf-8&lang=en+en

6) Runaway queries coming in through DDF often get mistaken for system hang issues. See the hung thread info. apar (II06335) and the DDF hangs info. apars (II08215 and II11164). Use DB2PM TOP processing to identify the top consumers (see the DB2PM reference manual for TOP processing info).

7) ALWAYS TEST FIRST. :-) If you are going to rebind, rebind to a test collection first and test.

8) With dynamic SQL, any time you change catalog statistics, DB2 maintenance levels, z-parm settings, CP speed or number of engines, RID pool size, sort pool size, or buffer pool size, your access paths are vulnerable to change. The exception is dynamic SQL using optimization hints to "lock in" a particular access path. This places a huge premium on knowing your workload and data layout. You need to understand which access paths are the most vulnerable to change and why, and how to make those vulnerable access paths more stable. Understand data skew, column correlations, and the stats needed to distinguish between good and bad access paths - paths that may appear to the optimizer to have similar costs when the appropriate stats are not available, but in reality have vastly different costs.

How do you identify and respond to access path regressions? Use
-START TRACE(PERFM) CLASS(30) IFCID(22,63) TDATA(CPU,COR,TRA) DEST(SMF/GTF) PLAN() AUTHID()
to capture the mini-bind (access path) and SQL text. REOPT(VARS), optimization hints, and query or catalog tweaks may all provide fast short-term relief for access path regressions.
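A minimal sketch of applying one of the short-term relief options above, an optimization hint. All names here (MYHINT, MYPROG, QUERYNO 100) are hypothetical examples, and hints must be enabled via the OPTIMIZATION HINTS subsystem parameter:

```sql
-- Tag the PLAN_TABLE rows of the access path you want to keep.
UPDATE PLAN_TABLE
   SET OPTHINT = 'MYHINT'
 WHERE QUERYNO  = 100
   AND PROGNAME = 'MYPROG';

-- Static SQL: ask DB2 to honor the hint at rebind time.
-- REBIND PACKAGE(MYCOLL.MYPROG) OPTHINT('MYHINT')

-- Dynamic SQL: set the special register before preparing the statement.
SET CURRENT OPTIMIZATION HINT = 'MYHINT';
```

Check the HINT_USED column in the PLAN_TABLE output of the rebind (or of an EXPLAIN of the dynamic statement) to confirm the hint was actually applied.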
Long term solutions usually involve collecting additional statistics to make the query in question less vulnerable to access path regressions. REOPT(VARS) or optimization hints may also be part of long term solutions. For dynamic SQL, environment testing (new release, maintenance, etc.) in a QA environment is the key way to catch possible performance regressions before they impact production. Static SQL can be protected from regression by NOT rebinding packages and plans; dynamic SQL is exposed to statistics and code changes immediately, so testing is critical.

9) Keep historical plan table data whenever practical. Bind with EXPLAIN(YES). For dynamic SQL, you may want to keep explain data for problem queries and "heavy hitter" queries (the queries that consume the most resources).

10) Optimization hint quirks:
a. Using optimization hints with QMF. QMF uses a different program name to execute and to explain SQL. This can cause problems with optimization hints because the PROGNAME for the explain does not match the program name for the execution, so the optimization hint is not found. Solution: set PROGNAME to DSQxFSQL.
b. Using optimization hints with the DB2 prepared statement cache (APAR PQ89083 needs to be applied).
c. Using optimization hints with ODBC:
COLLID set to 'DSNAOCLI'
VERSION 'UQ44518'
PROGNAME depends on the isolation level: DSNCLIxx, where xx is the isolation level, e.g. DSNCLICS for isolation level CS.

==============================================================
Standard DOC request for access path issues:
See informational apar II11945 for file transfer instructions. Please provide us with...
- a full explain report, including the query as below, for both the "good" and "bad" access paths (if possible):
EXPLAIN PLAN SET QUERYNO = value FOR
problem SQL goes here;
SELECT * FROM PLAN_TABLE WHERE QUERYNO = value
ORDER BY TIMESTAMP, QUERYNO, QBLOCKNO, PLANNO, MIXOPSEQ;
- the related DDL and catalog statistics so we may re-create the access paths.
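When both the "good" and "bad" explains have been captured in PLAN_TABLE, a side-by-side listing such as the following sketch (the QUERYNO value is hypothetical) makes the access path difference easy to see before sending the doc in:

```sql
-- List the two captured access paths together; rows with different
-- TIMESTAMP values are the "good" and "bad" binds of the same query.
SELECT TIMESTAMP, QBLOCKNO, PLANNO, METHOD, TNAME,
       ACCESSTYPE, ACCESSNAME, MATCHCOLS, INDEXONLY, PREFETCH
  FROM PLAN_TABLE
 WHERE QUERYNO = 100      -- example QUERYNO used for both explains
 ORDER BY TIMESTAMP, QBLOCKNO, PLANNO, MIXOPSEQ;
```

Differences in ACCESSTYPE (e.g. 'I' for index access vs. 'R' for a table space scan), MATCHCOLS, or PREFETCH are usually the quickest indicators of what changed.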
The recommended method is to use DB2 Visual Explain V8 for z/OS. The Service SQL function works for DB2 V6, DB2 V7, and DB2 V8 and is much easier than DB2PLI8. This is very easy if you already have DB2 Connect set up on a client at your shop.
http://www-306.ibm.com/software/data/db2/zos/osc/ve/index.html
Go to TOOLS on the main menu and select SERVICE SQL. Enter the ETR/PMR number and use the SQL statement as input. Any views used can also be manually added to the list. The tool will generate the appropriate DDL, STATS, and ZPARM information. You can then click SEND FILES to FTP the files to IBM. Please send the tersed files to our FTP site and update the PMR when they are available.

================================================================
General Perf. Issues:

1) Make sure you are really comparing apples to apples. Isolate variables. Make sure your measurements are as clean as possible. Be aware of what has changed between the measurements you are comparing.

2) ALWAYS TEST FIRST.

3) ALWAYS TEST FIRST.

4) Keep historical trace data leading up to any system/workload/environment changes. Our chances of resolving your issue quickly are hugely better if you have historical trace data to use as a baseline for comparison.

5) Always run with at least DB2 traces ACCTG CLASS(1,2,3) and STATS CLASS(1,3,4,5,6) - you can leave out stats class(5) if you are not using data sharing.

6) If you are experiencing a CPU increase for the same amount of work (same number of SQL statements, getpages, etc.), 90% of the time this is either an OEM monitor issue or a storage use issue. Try reducing the level of online monitoring, or turning it off altogether temporarily, to see if this changes things. Related info. apars:
II10817 - storage use
II12697 - OEM monitor issues

7) A high percentage of NOT ACCOUNTed for time can be caused by the things documented in the DB2 Admin. Guide, section 126.96.36.199.1, "Major items on the report." Check your RMF reports for high CPU levels and/or high levels of paging activity.

8) See informational apar II12337 for an index of DB2 informational apars.

9) Timeout/deadlock issues info. apar - II13697.

10) More to come on DDF, WAS, Java, WLM, etc.

Standard DOC request for general perf. issues:
It varies too much depending on the symptoms. A good starting point is the raw DB2 trace data from the problem period (1 hour) and a matching set from the same period the prior day/week/version/maintenance level (a "good" case) to use as a baseline for comparison.

Our recommendation for DB2 traces to run with under normal conditions:
-START TRACE(ACCTG) CLASS(1,2,3,7,8) (7 and 8 are optional)
-START TRACE(STAT) CLASS(1,3,4,5,6)
Set the STATSINT z-parm to no more than 15 minutes.

CST/RSU -> THIS IS A GREAT RESOURCE. PLEASE TAKE ADVANTAGE OF IT.
CST/RSU - Consolidated Service Test and Recommended Service Upgrade:
http://www-1.ibm.com/servers/eserver/zseries/zos/servicetst/mission.html

11) When viewing explain output for a query containing a top-level UNION or UNION ALL, beginning in V8 you will see a row in the PLAN_TABLE representing the top-level UNION/UNION ALL as query block 1. The QBLOCK_TYPE column for this row contains 'UNION' or 'UNIONA', respectively. This new row is a result of a change in the way query blocks are processed for a top-level UNION/UNION ALL in V8.
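A short sketch of item 11 (table and column names are hypothetical): explain a top-level UNION ALL on V8 and look at the new query block 1 row.

```sql
EXPLAIN PLAN SET QUERYNO = 11 FOR
  SELECT C1 FROM T1
  UNION ALL
  SELECT C1 FROM T2;

SELECT QUERYNO, QBLOCKNO, PLANNO, TNAME, QBLOCK_TYPE
  FROM PLAN_TABLE
 WHERE QUERYNO = 11
 ORDER BY QBLOCKNO, PLANNO;
-- The QBLOCKNO = 1 row carries QBLOCK_TYPE 'UNIONA' here
-- ('UNION' if the query used UNION instead of UNION ALL).
```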
Reported component name
PB LIB INFO ITE